row | text_prompt | code_prompt
---|---|---|
15,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields AOD
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields AOD
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
15,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Layers of the Earth
Students will analyze the following information and answer questions along the way in order to learn the different layers of the Earth.
Through BBC's website (provided below), students can find further information about the topic of Layers of the Earth
http
Step1: Questions
Where are earthquakes most likely to occur?
What is the name for movements between plates?
Using the data above, which earthquake had a stronger impact?
Using the link provided above, analyze the table given and identify each location's magnitude type along with its range.
Temperature and Pressure
Earth's internal temperature increases with depth. This increase rate is not linear, though. As shown in the data below, the temperature increases quickly through the crust at about 20°C per kilometer. The temperature then increases more slowly as we descend through the mantle, sharply increases at the base of the mantle, and then increases slowly through the core. The temperature is around 1000°C at the base of the crust, around 3500°C at the base of the mantle, and around 5,000°C at Earth’s center. | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
# Read in data that will be used for answering some of the questions below
data = pd.read_csv("./significant_month.csv")
# Observe the first 5 rows of the data provided below
data.head(4)
Explanation: Layers of the Earth
Students will analyze the following information and answer questions a long the way in order to learn the different layers of the earth.
Through BBC's website (provided below), students can find further information about the topic of Layers of the Earth
http://www.bbc.co.uk/schools/gcsebitesize/geography/natural_hazards/tectonic_plates_rev1.shtml
The Different Layers
There are four different layers of the earth. Information for this section can be read through Annenberg Learner's website: https://www.learner.org/interactives/dynamicearth/structure.html
Use the following image to answer the questions below
<img src='https://classroom.therefugeecenter.org/wp-content/uploads/2016/04/Pic22.jpg'>
Questions
Name the four layers of the Earth in order.
What are the names of the two regions of the Mantle?
How deep is the Earth's crust?
What materials make up the inner core?
Composition and Density
The following link contains information written by Eugene C. Robertson for USGS.gov, which will help you understand this concept in more depth: https://pubs.usgs.gov/gip/interior/
Use the following table to answer the questions below.
<img src='https://apionline.sodapdf.com/Public/widgets/convertmyimage/download/density.jpg'>
Questions
What is the approximate density in grams per cubic centimeter of the earth at 1600km below sea level?
Why do you think the density of the earth increases as we go further down?
What kind of rocks are found in the thickest layer of the Earth?
Name the first internal structural element to be identified.
Earthquakes
The Crust layer of the Earth may be broken into pieces identified as plates. When such plates move, they can produce an earthquake and/or a volcano.
To learn the abbreviations used for this chart, visit USGS at https://earthquake.usgs.gov/earthquakes/feed/v1.0/csv.php
End of explanation
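If you want the same table fresh from the source rather than from the local CSV, a sketch like the following should work, assuming USGS still serves the significant_month.csv summary feed at this path and that the column names used below are unchanged:
import pandas as pd
# Pull the "significant earthquakes, past month" feed straight from USGS;
# the local significant_month.csv used in this notebook is a snapshot of this feed.
feed_url = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/significant_month.csv"
quakes = pd.read_csv(feed_url)
print(quakes[["time", "place", "mag", "magType"]].head())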
# Read the data that will be used for answering the questions below.
data = pd.read_csv("./temp.csv")
# Observe the first 20 rows and plot provided for you below.
data.head(20)
fig = plt.figure(figsize=(10,3))
plt.plot(data['Depth in km'],data['Temperature in C'])
plt.ylabel('Temperature in C')
plt.xlabel('Depth in km')
plt.annotate('Lithosphere', xy=(0,0), xytext=(90,0))
plt.annotate('Asthenosphere', xy=(100,1100), xytext=(500,1100),
arrowprops=dict(facecolor='yellow', shrink=0.02))
plt.annotate('Mantle', xy=(400,1900), xytext=(1000,1900),
arrowprops=dict(facecolor='green',shrink=0.1))
plt.annotate('Outer Core', xy=(2800,3840), xytext=(3200,3840),
arrowprops=dict(facecolor='orange'))
plt.annotate('Inner Core', xy=(5100,5400), xytext=(5100,4300),
arrowprops=dict(facecolor='red'))
# Note: the arrows on the graph point out where the specific layer begins
# Observe the following graph, it will also be needed to answer the questions below.
data = pd.read_csv("./Pressure_Data.csv")
# Note: 1 on the y-axis is equal to 1 x 10^6.
fig = plt.figure(figsize=(10,3))
plt.plot(data['Depth in km'],data['Pressure in bar'], 'r--')
plt.ylabel('Pressure [10^6bar]')
plt.xlabel('Depth [km]')
plt.annotate('Mantle', xy=(1500,1.5))
plt.annotate('Core', xy=(4800,1.0))
Explanation: Questions
Where are earthquakes most likely to occur?
What is the name for movements between plates?
Using the data above, which earthquake had a stronger impact?
Using the link provided above, analyze the table given and identify each location's magnitude type along with its range.
Temperature and Pressure
Earth's internal temperature increases with depth. This increase rate is not linear, though. As shown in the data below, the temperature increases quickly through the crust at about 20°C per kilometer. The temperature then increases more slowly as we descend through the mantle, sharply increases at the base of the mantle, and then increases slowly through the core. The temperature is around 1000°C at the base of the crust, around 3500°C at the base of the mantle, and around 5,000°C at Earth’s center.
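As a rough back-of-the-envelope check on the 20°C per kilometer figure quoted above (the 15°C surface value below is an assumption, not something taken from the data files):
# Estimate crustal temperature from the ~20 C per km gradient described above.
surface_temp_c = 15      # assumed average surface temperature
gradient_c_per_km = 20   # approximate gradient through the crust
def crust_temp(depth_km):
    return surface_temp_c + gradient_c_per_km * depth_km
print(crust_temp(10))    # about 215 C at 10 km depth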
End of explanation |
15,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
conn.rollback()
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
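A small wrapper along these lines can run a statement and roll back automatically when it fails; the run_query helper is only a sketch and is not part of the provided scaffolding:
def run_query(conn, statement):
    # Execute a statement; on failure, roll back so the connection stays usable.
    cursor = conn.cursor()
    try:
        cursor.execute(statement)
        return cursor.fetchall()
    except Exception:
        conn.rollback()
        raise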
cursor = conn.cursor()
statement = "SELECT movie_title FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "SELECT COUNT(*) FROM uitem WHERE musical = 1 OR childrens = 1"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
cursor = conn.cursor()
statement = "SELECT DISTINCT(occupation), COUNT(*) FROM uuser GROUP BY occupation HAVING COUNT(*) > 50"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
cursor = conn.cursor()
statement = "SELECT DISTINCT(movie_title) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE EXTRACT(YEAR FROM release_date) < 1992 AND rating = 5 GROUP BY movie_title"
#TA-STEPHAN: Try using this statement
#statement = "SELECT DISTINCT uitem.movie_title, udata.rating FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE documentary = 1 AND udata.rating = 5 AND uitem.release_date < '1992-01-01';"
# if "any" has to be taken in the sense of "every":
# statement = "SELECT movie_title FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE EXTRACT(YEAR FROM release_date) < 1992 GROUP BY movie_title HAVING MIN(rating) = 5"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "SELECT movie_title), AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title ORDER BY AVG(rating) LIMIT 10"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = "SELECT movie_title, AVG(rating) FROM udata JOIN uitem ON uitem.movie_id = udata.item_id WHERE horror = 1 GROUP BY movie_title HAVING COUNT(rating) > 10 ORDER BY AVG(rating) LIMIT 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
End of explanation |
15,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
English-Chinese translation based on word vectors: the first assignment for the second half of "Deep Learning on PyTorch" (火炬上的深度学习)
In this assignment, you will semi-independently build an English-to-Chinese word translator.
This file is the companion source code produced by Swarma AI Academy (集智AI学园), http
Step1: Load the word vectors
First, let's load word vectors that others have already trained on large corpora.
Step2: Visualize the relative positions of a group of words with the same meanings in the word vectors of the two different languages
Step3: Conclusion: as can be seen, the relationships among the Chinese numerals (一, 二, ...) are very similar to the relationships among the English number words.
Step 3: Train a neural network that takes the word vector of an English word as input, outputs a Chinese word vector, and translates it into Chinese.
First, read in a pre-built dictionary (dictionary.txt). The dictionary was produced by the instructor by calling the Baidu Translate API to translate the words of an English novel into Chinese one by one.
We load the dictionary entry by entry and look up the corresponding Chinese word vector; if it is found, the pair is stored in original_words as the formal training set. | Python Code:
# 加载必要的程序包
# PyTorch的程序包
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# 数值运算和绘图的程序包
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# 加载机器学习的软件包,主要为了词向量的二维可视化
from sklearn.decomposition import PCA
#加载Word2Vec的软件包
import gensim as gensim
from gensim.models import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
from gensim.models.word2vec import LineSentence
#加载正则表达式处理的包
import re
#在Notebook界面能够直接显示图形
%matplotlib inline
Explanation: English-Chinese translation based on word vectors: the first assignment for the second half of "Deep Learning on PyTorch" (火炬上的深度学习)
In this assignment, you will semi-independently build an English-to-Chinese word translator.
This file is the companion source code for Lesson VI of "Deep Learning on PyTorch" (火炬上的深度学习), produced by Swarma AI Academy (集智AI学园), http://campus.swarma.org
End of explanation
# 加载中文词向量,下载地址为:链接:http://pan.baidu.com/s/1gePQAun 密码:kvtg
# 该中文词向量库是由尹相志提供,训练语料来源为:微博、人民日报、上海热线、汽车之家等,包含1366130个词向量
word_vectors = KeyedVectors.load_word2vec_format('vectors.bin', binary=True, unicode_errors='ignore')
len(word_vectors.vocab)
# 加载中文的词向量,下载地址为:http://nlp.stanford.edu/data/glove.6B.zip,解压后将glove.6B.100d.txt文件拷贝到与本notebook
# 文件一致的文件夹洗面。
f = open('glove.6B.100d.txt', 'r')
i = 1
# 将英文的词向量都存入如下的字典中
word_vectors_en = {}
with open('glove.6B.100d.txt') as f:
for line in f:
numbers = line.split()
word = numbers[0]
vectors = np.array([float(i) for i in numbers[1 : ]])
word_vectors_en[word] = vectors
i += 1
print(len(word_vectors_en))
Explanation: Step 1: Load the word vectors
First, let's load word vectors that others have already trained on large corpora.
End of explanation
# 中文的一二三四五列表
cn_list = {'一', '二', '三', '四', '五', '六', '七', '八', '九', '零'}
# 阿拉伯数字的12345列表
en_list = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}
# 英文数字的列表
en_list = {'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'zero'}
# 对应词向量都存入到列表中
cn_vectors = [] #中文的词向量列表
en_vectors = [] #英文的词向量列表
for w in cn_list:
cn_vectors.append(word_vectors[w])
for w in en_list:
en_vectors.append(word_vectors_en[w])
# 将这些词向量统一转化为矩阵
cn_vectors = np.array(cn_vectors)
en_vectors = np.array(en_vectors)
# 降维实现可视化
X_reduced = PCA(n_components=2).fit_transform(cn_vectors)
Y_reduced = PCA(n_components = 2).fit_transform(en_vectors)
# 绘制所有单词向量的二维空间投影
f, (ax1, ax2) = plt.subplots(1, 2, figsize = (10, 8))
ax1.plot(X_reduced[:, 0], X_reduced[:, 1], 'o')
ax2.plot(Y_reduced[:, 0], Y_reduced[:, 1], 'o')
zhfont1 = matplotlib.font_manager.FontProperties(fname='/home/fuyang/.fonts/YaHei.Consolas.1.11b.ttf', size=16)
for i, w in enumerate(cn_list):
ax1.text(X_reduced[i, 0], X_reduced[i, 1], w, fontproperties = zhfont1, alpha = 1)
for i, w in enumerate(en_list):
ax2.text(Y_reduced[i, 0], Y_reduced[i, 1], w, alpha = 1)
Explanation: Step 2: Visualize the relative positions of a group of words with the same meanings in the word vectors of the two different languages
End of explanation
original_words = []
with open('dictionary.txt', 'r') as f:
dataset = []
for line in f:
itm = line.split('\t')
eng = itm[0]
chn = itm[1].strip()
if eng in word_vectors_en and chn in word_vectors:
data = word_vectors_en[eng]
target = word_vectors[chn]
# 将中英文词对做成数据集
dataset.append([data, target])
original_words.append([eng, chn])
print(len(dataset)) # 共有4962个单词做为总的数据集合
# 建立训练集、测试集和校验集
# 训练集用来训练神经网络,更改网络的参数;校验集用来判断网络模型是否过拟合:当校验集的损失数值超过训练集的时候,即为过拟合
# 测试集用来检验模型的好坏
indx = np.random.permutation(range(len(dataset)))
dataset = [dataset[i] for i in indx]
original_words = [original_words[i] for i in indx]
train_size = 500
train_data = dataset[train_size:]
valid_data = dataset[train_size // 2 : train_size]
test_data = dataset[: train_size // 2]
test_words = original_words[: train_size // 2]
print(len(train_data), len(valid_data), len(test_data))
# 开始训练一个多层神经网络,将一个100维度的英文向量映射为200维度的中文词向量,隐含层节点为30
input_size = 100
output_size = 200
hidden_size = 30
# 新建一个神经网络,包含一个隐含层
model = nn.Sequential(nn.Linear(input_size, hidden_size),
nn.Tanh(),
nn.Linear(hidden_size, output_size)
)
print(model)
# 构造损失函数
criterion = torch.nn.MSELoss()
# 构造优化器
optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)
# 总的循环周期
num_epoch = 100
#开始训练500次,每次对所有的数据都做循环
results = []
for epoch in range(num_epoch):
train_loss = []
for data in train_data:
# 读入数据
x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
# 模型预测
output = model(x)
# 反向传播算法训练
optimizer.zero_grad()
loss = criterion(output, y)
train_loss.append(loss.data.numpy()[0])
loss.backward()
optimizer.step()
# 在校验集上测试一下效果
valid_loss = []
for data in valid_data:
x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
output = model(x)
loss = criterion(output, y)
valid_loss.append(loss.data.numpy()[0])
results.append([np.mean(train_loss), np.mean(valid_loss)])
print('{}轮,训练Loss: {:.2f}, 校验Loss: {:.2f}'.format(epoch, np.mean(train_loss), np.mean(valid_loss)))
# 绘制图形
a = [i[0] for i in results]
b = [i[1] for i in results]
plt.plot(a, 'o', label = 'Training Loss')
plt.plot(b, 's', label = 'Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Function')
plt.legend()
# Evaluate accuracy on the test set
# Two evaluation criteria: an exact whole-word match between the prediction and the reference, and a single-character match
exact_same = 0  # number of exact whole-word matches
one_same = 0  # number of single-character matches
results = []
for i, data in enumerate(test_data):
x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
    # Get the model output
output = model(x)
output = output.squeeze().data.numpy()
    # Find the Chinese word vector most similar to the output vector
most_similar = word_vectors.wv.similar_by_vector(output, 1)
    # Record the reference word together with the word corresponding to the most similar vector
results.append([original_words[i][1], most_similar[0][0]])
    # Whole-word match
if original_words[i][1] == most_similar[0][0]:
exact_same += 1
    # At least one character matches
if list(set(list(original_words[i][1])) & set(list(most_similar[0][0]))) != []:
one_same += 1
print("精确匹配率:{:.2f}".format(1.0 * exact_same / len(test_data)))
print('一字匹配率:{:.2f}'.format(1.0 * one_same / len(test_data)))
print(results)
Explanation: Conclusion: the relative relationships among the Chinese numerals (一, 二, and so on) closely mirror the relationships among the English number words.
Step 3: Train a neural network that takes the word vector of an English word as input, outputs a Chinese word vector, and translates the word into Chinese.
First, read in a pre-built dictionary (dictionary.txt). This dictionary was produced by calling the Baidu Translate API to translate, word by word, the vocabulary of an English novel into Chinese.
We load the dictionary entry by entry and look up the corresponding Chinese word vector; if it is found, the pair is added to original_words and used as the training data.
End of explanation |
15,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 3
Step2: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data.
Project SRA
Step3: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
Step4: This study includes several re-sequenced individuals. We combine them before beginning the analysis.
Step5: Make a params file
Step6: Assemble in pyrad
Step7: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.36M.
Step8: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
Step9: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
Step10: Print final stats table
Step11: Infer ML phylogeny in raxml as an unrooted tree
Step12: Plot the tree in R using ape
Step13: Meaure phylo distance (GTRgamma distance) | Python Code:
### Notebook 3
### Data set 3 (American oaks)
### Authors: Eaton et al. (2015)
### Data Location: NCBI SRA SRP055977
Explanation: Notebook 3:
This is an IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set.
End of explanation
%%bash
## make a new directory for this analysis
mkdir -p empirical_3/fastq/
## IPython code
import pandas as pd
import urllib2
import os
## open the SRA run table from github url
url = "https://raw.githubusercontent.com/"+\
"dereneaton/RADmissing/master/empirical_3_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")
## print first few rows
print indata.head()
def wget_download(SRR, outdir, outname):
    """Python function to get sra data from ncbi and write to
    outdir with a new name using bash call wget."""
## get output name
output = os.path.join(outdir, outname+".sra")
## create a call string
call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\
"ftp://ftp-trace.ncbi.nlm.nih.gov/"+\
"sra/sra-instant/reads/ByRun/sra/SRR/"+\
"{}/{}/{}.sra;".format(SRR[:6], SRR, SRR)
## call bash script
! $call
Explanation: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data.
Project SRA: SRP055977
BioProject ID: PRJNA277574
Biosample numbers: SAMN03394519 - SAMN03394561
Runs: SRR1915524 -- SRR1915566
SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP055977
End of explanation
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s):
wget_download(SRR, "empirical_3/fastq/", ID)
%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_3/fastq/ empirical_3/fastq/*.sra
## remove .sra files
rm empirical_3/fastq/*.sra
Explanation: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
End of explanation
##IPython code
import glob
taxa = [i.split("/")[-1].split('_')[0] for i in glob.glob("empirical_3/fastq/*.gz")]
for taxon in set(taxa):
if taxa.count(taxon) > 1:
print taxon, "merged"
## merge replicate files
! cat empirical_3/fastq/$taxon\_v.fastq.gz \
empirical_3/fastq/$taxon\_re.fastq.gz \
> empirical_3/fastq/$taxon\_me.fastq.gz
## remove ind replicate files
! rm empirical_3/fastq/$taxon\_v.fastq.gz
! rm empirical_3/fastq/$taxon\_re.fastq.gz
Explanation: This study includes several re-sequenced individuals. We combine them before beginning the analysis.
End of explanation
%%bash
pyrad --version
%%bash
## remove old params file if it exists
rm params.txt
## create a new default params file
pyrad -n
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_3/ ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt
sed -i '/## 7. /c\20 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_3_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_3/fastq/*.gz ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt
cat params.txt
Explanation: Make a params file
End of explanation
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_3_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Assemble in pyrad
End of explanation
## read in the data
s2dat = pd.read_table("empirical_3/stats/s2.rawedit.txt", header=0, nrows=36)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
Explanation: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.36M.
End of explanation
## read in the s3 results
s3dat = pd.read_table("empirical_3/stats/s3.clusters.txt", header=0, nrows=39)
## print summary stats
print "summary of means\n=================="
print s3dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s3dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s3dat['d>5.tot']/s3dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s3dat["d>5.tot"]/s3dat["total"]).max()
print "\nhighest coverage in sample:"
print s3dat['taxa'][s3dat['d>5.tot']/s3dat["total"]==max_hiprop]
import numpy as np
## print mean and std of coverage for the highest coverage sample
with open("empirical_3/clust.85/AR_re.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
print depths.mean(), depths.std()
Explanation: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
End of explanation
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_3/clust.85/AR_re.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
ylabel="N loci",
label="dataset3/sample=AR_re")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_3_depthplot.svg")
Explanation: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
End of explanation
cat empirical_3/stats/empirical_3_m4.stats
%%bash
head -n 10 empirical_3/stats/empirical_3_m2.stats
Explanation: Print final stats table
End of explanation
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_3/ \
-n empirical_3_m4 -s empirical_3/outfiles/empirical_3_m4.phy
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_3/ \
-n empirical_3_m2 -s empirical_3/outfiles/empirical_3_m2.phy
%%bash
head -n 20 empirical_3/RAxML_info.empirical_3_m4
%%bash
head -n 20 empirical_3/RAxML_info.empirical_3_m2
Explanation: Infer ML phylogeny in raxml as an unrooted tree
End of explanation
%load_ext rpy2.ipython
%%R -w 400 -h 800
library(ape)
tre <- read.tree("empirical_3/RAxML_bipartitions.empirical_3")
ltre <- ladderize(tre)
plot(ltre, edge.width=2)
Explanation: Plot the tree in R using ape
End of explanation
%%R
mean(cophenetic.phylo(ltre))
Explanation: Measure phylo distance (GTRgamma distance)
End of explanation |
15,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pipeline
Cleaning confounds
We first created the confound matrix according to Smith et al. (2015). The confound variables are motion (Jenkinson), sex, and age. We also created squared confound measures to help account for potentially nonlinear effects of these confounds.
Nested k-fold cross validation
We employed the nested approach to accommodate hyper-parameter selection and model selection. This is a complex and costly method, but the sample size allows us to use this approach.
Step1: confound cleaning in CV loops
Step2: Examining the explained variance %
Step3: Permutation test with data augmentation
After data decomposition with SCCA, one way to access the reliability of the canonical component is permutation test. The purpose of a permutation test is to construct a null distribution of the target matrice to access the confidence level of our discovery. The target matrix should be the the statistics optimisation goal in the original model. In SCCA, the canonical correlations are used to construct the null distribution. There are two possible ways to permute the data - scarmbling the subjuect-wise or variable-wise links. To perform subject-wise scrambling, you shuffle one dataset by row, so each observation will have non-matching variables. This permutation scheme access the significance of the individual information to the model. We can, otherwise, shuffle the order of the variable for each participant to disturb the variable property, hence it can access the contribution of variable profile to the modeling results. We adopt the permutation test with the FWE-corrected p-value in the Smith et al. 2015 paper with data arugmentaion to increase the size of the resampling datasets. | Python Code:
import copy
import os, sys
import numpy as np
import pandas as pd
import joblib
os.chdir('../')
# load my modules
from src.utils import load_pkl
from src.file_io import save_output
from src.models import nested_kfold_cv_scca, clean_confound, permutate_scca
from src.visualise import set_text_size, show_results, write_pdf, write_png
dat_path = './data/processed/dict_SCCA_data_prepro_revision1.pkl'
# load data
dataset = load_pkl(dat_path)
dataset.keys()
FC_nodes = dataset['FC_nodes']
MRIQ = dataset['MRIQ']
mot = dataset['Motion_Jenkinson']
sex = dataset['Gender']
age = dataset['Age']
confound_raw = np.hstack((mot, sex, age))
out_folds = 5
in_folds = 5
n_selected = 4
Explanation: Pipeline
Cleaning confounds
We first created the confound matrix according to Smith et al. (2015). The confound variables are motion (Jenkinson), sex, and age. We also created squared confound measures to help account for potentially nonlinear effects of these confounds.
Nested k-fold cross validation
We employed the nested approach to accommodate hyper-parameter selection and model selection. This is a complex and costly method, but the sample size allows us to use this approach.
End of explanation
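The actual implementation used below is nested_kfold_cv_scca from src.models; purely as an illustration of the nested structure, a generic sketch is given here. The fit/score callables, the parameter grid, and numpy-array inputs are assumptions of the sketch, not the project's code.
# Illustrative sketch of nested k-fold cross-validation (not the project's implementation)
from sklearn.model_selection import KFold
import numpy as np

def nested_cv_sketch(X, Y, fit, score, param_grid, out_folds=5, in_folds=5):
    outer = KFold(n_splits=out_folds, shuffle=True, random_state=0)
    outer_scores = []
    for train_idx, test_idx in outer.split(X):
        X_train, Y_train = X[train_idx], Y[train_idx]
        # inner loop: choose hyper-parameters using the training portion only
        inner = KFold(n_splits=in_folds, shuffle=True, random_state=0)
        best_param, best_score = None, -np.inf
        for param in param_grid:
            scores = []
            for tr, va in inner.split(X_train):
                model = fit(X_train[tr], Y_train[tr], param)
                scores.append(score(model, X_train[va], Y_train[va]))
            if np.mean(scores) > best_score:
                best_param, best_score = param, np.mean(scores)
        # outer loop: refit with the selected parameters and evaluate on the held-out fold
        model = fit(X_train, Y_train, best_param)
        outer_scores.append(score(model, X[test_idx], Y[test_idx]))
    return outer_scores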
%%time
para_search, best_model, pred_errors = nested_kfold_cv_scca(
FC_nodes, MRIQ, R=confound_raw, n_selected=n_selected,
out_folds=5, in_folds=5,
reg_X=(0.1, 0.9), reg_Y=(0.1, 0.9)
)
Explanation: confound cleaning in CV loops
End of explanation
X, Y, R = clean_confound(FC_nodes, MRIQ, confound_raw)
from sklearn.linear_model import LinearRegression
from scipy.stats.mstats import zscore
lr = LinearRegression(fit_intercept=False)
lr.fit(R, np.arctanh(FC_nodes))
rec_ = lr.coef_.dot(R.T).T
r_2 = 1 - (np.var(np.arctanh(FC_nodes) - rec_) / np.var(np.arctanh(FC_nodes)))
print "confounds explained {}% of the FC data".format(np.round(r_2 * 100), 0)
lr = LinearRegression(fit_intercept=False)
lr.fit(R, zscore(MRIQ))
rec_ = lr.coef_.dot(R.T).T
r_2 = 1 - (np.var(zscore(MRIQ) - rec_) / np.var(zscore(MRIQ)))
print "confounds explained {}% of the self-report data".format(np.round(r_2 * 100), 0)
Explanation: Examining the explained variance %
End of explanation
%%time
df_permute = permutate_scca(X, Y, best_model.cancorr_, best_model, n_permute=5000)
df_permute
u, v = best_model.u, best_model.v
set_text_size(12)
figs = show_results(u, v, range(1,58), dataset['MRIQ_labels'], rank_v=True, sparse=True)
write_png('./reports/revision/bestModel_yeo7nodes_component_{:}.png', figs)
X_scores, Y_scores, df_z = save_output(dataset, confound_raw, best_model, X, Y, path=None)
df_z.to_csv('./data/processed/NYCQ_CCA_score_revision_yeo7nodes_{0:1d}_{1:.1f}_{2:.1f}.csv'.format(
best_model.n_components, best_model.penX, best_model.penY))
df_z.to_pickle('./data/processed/NYCQ_CCA_score_revision_yeo7nodes_{0:1d}_{1:.1f}_{2:.1f}.pkl'.format(
best_model.n_components, best_model.penX, best_model.penY))
joblib.dump(best_model,
'./models/SCCA_Yeo7nodes_revision_{:1d}_{:.2f}_{:.2f}.pkl'.format(
best_model.n_components, best_model.penX, best_model.penY))
Explanation: Permutation test with data augmentation
After data decomposition with SCCA, one way to assess the reliability of the canonical components is a permutation test. The purpose of a permutation test is to construct a null distribution of the target statistic in order to assess the confidence level of our discovery. The target statistic should be the optimisation goal of the original model; in SCCA, the canonical correlations are used to construct the null distribution. There are two possible ways to permute the data - scrambling the subject-wise or the variable-wise links. To perform subject-wise scrambling, you shuffle one dataset by row, so each observation is paired with non-matching variables; this permutation scheme assesses the significance of the subject-level information to the model. Alternatively, we can shuffle the order of the variables for each participant to disturb the variable structure, which assesses the contribution of the variable profile to the modeling results. We adopt the permutation test with the FWE-corrected p-value from the Smith et al. 2015 paper, with data augmentation to increase the size of the resampling datasets.
End of explanation |
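As a rough illustration only (the analysis above relies on the project's permutate_scca with FWE correction), a bare-bones subject-wise permutation p-value could be sketched as follows; fit_cancorr is an assumed user-supplied callable that refits SCCA and returns the first canonical correlation.
# Illustrative sketch of a subject-wise permutation test (not the project's implementation)
import numpy as np

def permutation_pvalue(X, Y, fit_cancorr, observed_r, n_permute=1000, seed=0):
    rng = np.random.RandomState(seed)
    null = np.empty(n_permute)
    for i in range(n_permute):
        perm = rng.permutation(Y.shape[0])   # scramble the subject-wise links
        null[i] = fit_cancorr(X, Y[perm])    # refit and store the canonical correlation
    return (np.sum(null >= observed_r) + 1.0) / (n_permute + 1.0)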
15,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lecture 3
Step2: <p class='alert alert-success'>
Solve the questions in green blocks. Save the file as ME249-Lecture-3-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send me and the grader the <b>html</b> file not the ipynb file.
</p>
<h1>Discrete Fourier Series</h1>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The longest wavelength wave that can be contained in the domain is $L_x$. A physical understanding of Fourier series is the representation of a system as the sum of many waves of wavelengths smaller than or equal to $L_x$. In a discrete sense, the series of waves used to decompose the system is defined as
Step3: First, let's perform a sanity check, i.e.
$$
u=FT^{-1}\left[FT\left[u\right]\right]
$$
where $FT$ designates the Fourier transform and $FT^{-1}$ its inverse.
Step4: <h2>Spectrum</h2>
For now we will define the spectrum $\Phi_u$ as
<p class='alert alert-danger'>
$$
\Phi_u(k_n) = \hat{u}_n.\hat{u}_n^*
$$
</p>
which can be interpreted as the energy contained in the $k_n$ wavenumber. This is helpful when searching for the most energetic scales or waves in our system. Thanks to the symmetries of the FFT, the spectrum is defined over $n=0$ to $N_x/2$
Step5: <h2>Low-Pass Filter</h2>
The following code filters the original signal by removing half of the wavenumbers using the FFT and compares it to the exact filtered function
Step6: <h2> High-Pass Filter</h2>
<p class='alert alert-success'>
From the example below, develop a function for a high-pass filter.
</p>
Step7: <h1>Derivation in Fourier Space</h1>
Going back to the Fourier series,
$$
u(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)=\sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}k_nx\right)
$$
with
$$
k_n=\frac{2\pi n}{L_x}\,
$$
it is clear that the $m$-th derivative of the real variable $u$ is
Step8: <h1> Comparison with Finite Difference Derivatives</h1>
When the number of Fourier nodes is sufficient to capture all scales, derivatives computed in the Fourier space are essentially exact. The following code compares the exact first derivative with a first-order upwind scheme
Step9: The error is large which is compounded by the fact that <FONT FACE="courier" style="color
Step10: <p class='alert alert-success'>
Which scales are the most affected by the finite difference scheme? What effect do you observe?
</p>
<h1>Modified Wavenumber</h1>
Starting from,
$$
f(x) = \sum_{k=-N/2+1}^{N/2}\hat{f}_k\exp\left(\hat{\jmath}\frac{2\pi kx}{L_x}\right)
$$
and introducing the wavenumber
$$
\omega = \frac{2\pi k}{L_x}
$$
allows to reduce the Fourier expansion to
$$
f(x) = \sum_{\omega=-\pi}^{\pi}\hat{f}_\omega e^{\hat{\jmath}\omega x}
$$
The derivative of $f$ becomes
$$
f'(x) = \sum_{\omega=-\pi}^{\pi}\hat{f'}_\omega e^{\hat{\jmath}\omega x}
$$
where $\hat{f'}_\omega$ is only a notation, not the derivative of the Fourier coefficient
Step11: First, let's verify that the derivation of the modified wavenumber is correct.
Step12: <p class='alert alert-success'>
- Explain why $\omega\Delta x$ can be defined from $0$ to $\pi$.
</p>
<p class='alert alert-success'>
- Write a code to illustrate the effects of the imaginary part and real part on the derivative on the following function
$$
f(x) = cos(nx)
$$
defined for $x\in[0,2\pi]$, discretized with $N$ points. Study a few $n$, ranging from large scales to small scales. Note the two effects we seek to identify are phase change and amplitude change.
</p>
<p class='alert alert-success'>
- Derive the modified wavenumber for the second order central finite difference scheme
$$
\frac{\delta f}{\delta x}=\frac{f_{i+1}-f_{i-1}}{2\Delta x}
$$
</p>
<p class='alert alert-success'>
- Create a second order upwind scheme and derive the modified wavenumber. Compare the performance of the first order and the second order schemes. For the second order upwind scheme, find $a$, $b$ and $c$ such that
$$
\frac{\delta f}{\delta x}=\frac{af_{i-2}+bf_{i-1}+cf_i}{\Delta x}
$$
</p> | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format
from IPython.display import Image
from IPython.core.display import HTML
def header(text):
raw_html = '<h4>' + str(text) + '</h4>'
return raw_html
def box(text):
raw_html = '<div style="border:1px dotted black;padding:2em;">'+str(text)+'</div>'
return HTML(raw_html)
def nobox(text):
raw_html = '<p>'+str(text)+'</p>'
return HTML(raw_html)
def addContent(raw_html):
global htmlContent
htmlContent += raw_html
class PDF(object):
def __init__(self, pdf, size=(200,200)):
self.pdf = pdf
self.size = size
def _repr_html_(self):
return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)
def _repr_latex_(self):
return r'\includegraphics[width=1.0\textwidth]{{{0}}}'.format(self.pdf)
class ListTable(list):
    """Overridden list class which takes a 2-dimensional list of
    the form [[1,2,3],[4,5,6]], and renders an HTML Table in
    IPython Notebook."""
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
font = {'family' : 'serif',
'color' : 'black',
'weight' : 'normal',
'size' : 18,
}
Explanation: Lecture 3: Accuracy in Fourier's Space
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Lx = 2.*np.pi
Nx = 256
u = np.zeros(Nx,dtype='float64')
du = np.zeros(Nx,dtype='float64')
ddu = np.zeros(Nx,dtype='float64')
k_0 = 2.*np.pi/Lx
x = np.linspace(Lx/Nx,Lx,Nx)
Nwave = 32
uwave = np.zeros((Nx,Nwave),dtype='float64')
duwave = np.zeros((Nx,Nwave),dtype='float64')
dduwave = np.zeros((Nx,Nwave),dtype='float64')
ampwave = np.random.random(Nwave)
phasewave = np.random.random(Nwave)*2*np.pi
for iwave in range(Nwave):
uwave[:,iwave] = ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
duwave[:,iwave] = -k_0*iwave*ampwave[iwave]*np.sin(k_0*iwave*x+phasewave[iwave])
dduwave[:,iwave] = -(k_0*iwave)**2*ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
u = np.sum(uwave,axis=1)
du = np.sum(duwave,axis=1)
ddu = np.sum(dduwave,axis=1)
#print(u)
plt.plot(x,u,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,du,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$du/dx$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,ddu,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$d^2u/dx^2$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
Explanation: <p class='alert alert-success'>
Solve the questions in green blocks. Save the file as ME249-Lecture-3-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send me and the grader the <b>html</b> file not the ipynb file.
</p>
<h1>Discrete Fourier Series</h1>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The longest wavelength wave that can be contained in the domain is $L_x$. A physical understanding of Fourier series is the representation of a system as the sum of many waves of wavelengths smaller than or equal to $L_x$. In a discrete sense, the series of waves used to decompose the system is defined as:
$$
a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)
$$
such that
<p class='alert alert-danger'>
$$
f(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)
$$
</p>
and
<p class='alert alert-danger'>
$$
a_n = \frac{1}{L_x}\int_Lf(x)\exp\left(-\hat{\jmath}\frac{2\pi nx}{L_x}\right)dx
$$
</p>
Here $\hat{\jmath}^2=-1$. Often the reduction to the wavenumber is used, where
<p class='alert alert-danger'>
$$
k_n = \frac{2\pi n}{L_x}
$$
</p>
Note that if $x$ is time instead of distance, $L_x$ is a time $T$ and the smallest frequency contained in the domain is $f_0=1/T_0$ and the wavenumber $n$ is $k_n=2\pi f_0n=2\pi f_n$ with $f_n$ for $\vert n\vert >1$ are the higher frequencies.
<h1>Discrete Fourier Transform (DFT)</h1>
In scientific computing we are interested in applying Fourier series to vectors or matrices containing an integer number of samples. The DFT is the Fourier series for that number of samples. DFT functions available in Python or any other language only care about the number of samples, therefore the wavenumber is
<p class='alert alert-danger'>
$$
k_n=\frac{2\pi n}{N_x}
$$
</p>
Consider a function $f$ periodic over a domain $0\leq x\leq 2\pi$, discretized by $N_x$ points. The nodal value is $f_i$ located at $x_i=(i+1)\Delta x$ with $\Delta x=L_x/N_x$. The DFT is defined as
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=0}^{N_x-1}f_i\exp\left(-2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
The inverse DFT is defined as
<p class='alert alert-danger'>
$$
f_i=\sum_{k=0}^{N_x-1}\hat{f}_k\exp\left(2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
<h1>Fast Fourier Transform (FFT)</h1>
Using symmetries, the FFT reduces computational costs and stores in the following way:
<p class='alert alert-danger'>
$$
\hat{f}_k=\sum_{i=-N_x/2+1}^{N_x/2}f_i\exp\left(-2\pi\hat{\jmath}\frac{ik}{N_x}\right)
$$
</p>
<p class='alert alert-info'>
Compared to the Fourier series, the DFT or FFT assumes that the system can be accurately captured by a finite number of waves. It is up to the user to ensure that the number of computational points is sufficient to capture the smallest scale, i.e. the smallest wavelength or highest frequency. Remember that the function on which the FT is applied must be periodic over the domain and the grid spacing must be uniform.
</p>
There are FT algorithms for unevenly spaced data, but this is beyond the scope of this notebook.
<h1>Filtering</h1>
The following provides examples of low- and high-pass filters based on the Fourier transform. An ideal low- (high-) pass filter passes frequencies below (above) a threshold without attenuation and removes frequencies above (below) that threshold.
When applied to spatial data (function of $x$ rather than $t$-time), the FT (Fourier Transform) of a variable is function of wavenumbers
$$
k_n=\frac{2\pi n}{L_x}
$$
or wavelengths
$$
\lambda_n=\frac{2\pi}{k_n}
$$
The test function is defined as a sum of $N_{wave}$ cosine functions:
$$
u(x)=\sum_{n=0}^{N_{wave}}A_n\cos\left(nx+\phi_n\right)
$$
with the following first and second derivatives:
$$
\frac{du}{dx}=\sum_{n=1}^{N_{wave}}-nA_n\sin\left(nx+\phi_n\right)
$$
$$
\frac{d^2u}{dx^2}=\sum_{n=1}^{N_{wave}}-n^2A_n\cos\left(nx+\phi_n\right)
$$
The python code for function u and its derivatives is written below. Here amplitudes $A_n$ and phases $\phi_n$ are chosen randomly of ranges $[0,1]$ and $[0,2\pi]$, respectively.
End of explanation
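As a quick cross-check of the DFT definition given above, the naive sum can be compared against numpy's FFT on a small random vector (a minimal sketch, not part of the original lecture code):
# Naive DFT: F_k = sum_i f_i exp(-2*pi*j*i*k/N), compared with np.fft.fft
import numpy as np
f = np.random.rand(8)
N = len(f)
F_naive = np.array([sum(f[i]*np.exp(-2j*np.pi*i*kk/N) for i in range(N)) for kk in range(N)])
print(np.allclose(F_naive, np.fft.fft(f)))  # expected: True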
#check FT^-1(FT(u)) - Sanity check
u_hat = np.fft.fft(u)
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,u,'r-',lw=2,label='$u$')
plt.plot(x,v,'b--',lw=2,label='$FT^{-1}[FT[u]]$')
plt.xlim(0,Lx)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
print('error',np.linalg.norm(u-v,np.inf))
Explanation: First, let's perform a sanity check, i.e.
$$
u=FT^{-1}\left[FT\left[u\right]\right]
$$
where $FT$ designates the Fourier transform and $FT^{-1}$ its inverse.
End of explanation
F = np.zeros(Nx/2+1,dtype='float64')
F = np.real(u_hat[0:Nx/2+1]*np.conj(u_hat[0:Nx/2+1]))
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0))) #This is how the FFT stores the wavenumbers
plt.loglog(k[0:Nx/2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi_u(k)$', fontdict = font)
plt.xticks(fontsize = 16)
y_ticks = np.logspace(-30,5,8)
plt.yticks(y_ticks,fontsize = 16) #Specify ticks, necessary when increasing font
plt.show()
Explanation: <h2>Spectrum</h2>
For now we will define the spectrum $\Phi_u$ as
<p class='alert alert-danger'>
$$
\Phi_u(k_n) = \hat{u}_n.\hat{u}_n^*
$$
</p>
which can be interpreted as the energy contained in the $k_n$ wavenumber. This is helpful when searching for the most energetic scales or waves in our system. Thanks to the symmetries of the FFT, the spectrum is defined over $n=0$ to $N_x/2$
End of explanation
# filtering the smaller waves
def low_pass_filter_fourier(a,k,kcutoff):
N = a.shape[0]
    a_hat = np.fft.fft(a)
    filter_mask = np.where(np.abs(k) > kcutoff)
a_hat[filter_mask] = 0.0 + 0.0j
a_filter = np.real(np.fft.ifft(a_hat))
return a_filter
kcut=Nwave/2+1
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
v = low_pass_filter_fourier(u,k,kcut)
u_filter_exact = np.sum(uwave[:,0:kcut+1],axis=1)
plt.plot(x,v,'r-',lw=2,label='filtered with fft')
plt.plot(x,u_filter_exact,'b--',lw=2,label='filtered (exact)')
plt.plot(x,u,'g:',lw=2,label='original')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
print('error:',np.linalg.norm(v-u_filter_exact,np.inf))
F = np.zeros(Nx/2+1,dtype='float64')
F_filter = np.zeros(Nx/2+1,dtype='float64')
u_hat = np.fft.fft(u)
F = np.real(u_hat[0:Nx/2+1]*np.conj(u_hat[0:Nx/2+1]))
v_hat = np.fft.fft(v)
F_filter = np.real(v_hat[0:Nx/2+1]*np.conj(v_hat[0:Nx/2+1]))
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
plt.loglog(k[0:Nx/2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.loglog(k[0:Nx/2+1],F_filter,'b-',lw=2,label='$\Phi_v$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi_u(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(y_ticks,fontsize = 16)
plt.show()
Explanation: <h2>Low-Pass Filter</h2>
The following code filters the original signal by removing half of the wavenumbers using the FFT and compares it to the exact filtered function
End of explanation
u_hat = np.fft.fft(u)
kfilter = Nwave/2
k = np.linspace(0,Nx-1,Nx)
filter_mask = np.where((k < kfilter) | (k > Nx-kfilter) )
u_hat[filter_mask] = 0.+0.j
v = np.real(np.fft.ifft(u_hat))
plt.plot(x,v,'r-',lw=2,label='Filtered (FT)')
plt.plot(x,np.sum(uwave[:,kfilter:Nwave+1],axis=1),'b--',lw=2,label='Filtered (exact)')
plt.plot(x,u,'g:',lw=2,label='original')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
F = np.zeros(Nx/2+1,dtype='float64')
F_filter = np.zeros(Nx/2+1,dtype='float64')
u_hat = np.fft.fft(u)
F = np.real(u_hat[0:Nx/2+1]*np.conj(u_hat[0:Nx/2+1]))
v_hat = np.fft.fft(v)
F_filter = np.real(v_hat[0:Nx/2+1]*np.conj(v_hat[0:Nx/2+1]))
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
plt.loglog(k[0:Nx/2+1],F,'r-',lw=2,label='$\Phi_u$')
plt.loglog(k[0:Nx/2+1],F_filter,'b-',lw=2,label='$\Phi_v$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(y_ticks,fontsize = 16)
plt.show()
Explanation: <h2> High-Pass Filter</h2>
<p class='alert alert-success'>
From the example below, develop a function for a high-pass filter.
</p>
End of explanation
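One possible shape for such a high-pass function, mirroring low_pass_filter_fourier above, is sketched here; it assumes k is the signed wavenumber array built with np.hstack as in the low-pass example.
# Sketch of a high-pass filter function (keeps wavenumbers with |k| >= kcutoff)
def high_pass_filter_fourier(a, k, kcutoff):
    a_hat = np.fft.fft(a)
    filter_mask = np.where(np.abs(k) < kcutoff)
    a_hat[filter_mask] = 0.0 + 0.0j
    return np.real(np.fft.ifft(a_hat))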
u_hat = np.fft.fft(u)
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
ik = 1j*k
v_hat = ik*u_hat
v = np.real(np.fft.ifft(v_hat))
plt.plot(x,v,'r-',lw=2,label='$FT^{-1}[\hat{\jmath}kFT[u]]$')
plt.plot(x,du,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(v-du))
mk2 = ik*ik
v_hat = mk2*u_hat
v = np.real(np.fft.ifft(v_hat))
plt.plot(x,v,'r-',lw=2,label='$FT^{-1}[-k^2FT[u]]$')
plt.plot(x,ddu,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$d^2u/dx^2$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(v-ddu))
Explanation: <h1>Derivation in Fourier Space</h1>
Going back to the Fourier series,
$$
u(x) = \sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}\frac{2\pi nx}{L_x}\right)=\sum_{n=-\infty}^{\infty}a_n\exp\left(\hat{\jmath}k_nx\right)
$$
with
$$
k_n=\frac{2\pi n}{L_x}\,
$$
it is clear that the $m$-th derivative of the real variable $u$ is:
$$
\frac{d^mu}{dx^m} = \sum_{n=-\infty}^{\infty}\left(\hat{\jmath}k_n\right)^ma_n\exp\left(\hat{\jmath}k_nx\right)
$$
In other words, if $u_n$ is defined as:
$$
u_n(x) = a_n\exp\left(\hat{\jmath}k_nx\right)
$$
then
$$
\frac{d^mu_n}{dx^m} = \left(\hat{\jmath}k_n\right)^m u_n\,.
$$
An $m$-th derivative in Fourier space amounts to multiplying each Fourier coefficient $a_n$ by $\left(\hat{\jmath}k_n\right)^m$.
The following is a code for the first derivative of $u$:
End of explanation
du_fd = np.zeros(Nx,dtype='float64')
dx = Lx/Nx
du_fd[1:Nx] = (u[1:Nx]-u[0:Nx-1])/dx
du_fd[0] = (u[0]-u[Nx-1])/dx
plt.plot(x,du_fd,'r-',lw=2,label='FD derivative')
plt.plot(x,du,'b--',lw=2,label='exact derivative')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(du_fd-du,np.inf))
Explanation: <h1> Comparison with Finite Difference Derivatives</h1>
When the number of Fourier nodes is sufficient to capture all scales, derivatives computed in the Fourier space are essentially exact. The following code compares the exact first derivative with a first-order upwind scheme:
$$
\left.\frac{\delta u}{\delta x}\right\vert_i=\frac{u_i-u_{i-1}}{\Delta x}
$$
For finite difference derivative, we will use the symbol $\delta/\delta x$ rather than $d/dx$.
<p class='alert alert-success'>
Show that the scheme is first order and write the leading term in the truncation error.
</p>
End of explanation
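As a hint for the green-block question, the Taylor-series argument can be sketched as follows: expanding $u_{i-1}$ about $x_i$,
$$
u_{i-1}=u_i-\Delta x\,u'_i+\frac{\Delta x^2}{2}u''_i-\frac{\Delta x^3}{6}u'''_i+\dots
$$
so that
$$
\frac{u_i-u_{i-1}}{\Delta x}=u'_i-\frac{\Delta x}{2}u''_i+O(\Delta x^2)\,,
$$
i.e. the scheme is first order, with leading truncation error term $-\frac{\Delta x}{2}u''_i$.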
F = np.zeros(Nx/2+1,dtype='float64')
F_fd = np.zeros(Nx/2+1,dtype='float64')
du_hat = np.fft.fft(du)/Nx
F = np.real(du_hat[0:Nx/2+1]*np.conj(du_hat[0:Nx/2+1]))
dv_hat = np.fft.fft(du_fd)/Nx
F_fd = np.real(dv_hat[0:Nx/2+1]*np.conj(dv_hat[0:Nx/2+1]))
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
plt.loglog(k[0:Nx/2+1],F,'r-',lw=2,label='$\Phi_{du/dx}$')
plt.loglog(k[0:Nx/2+1],F_fd,'b-',lw=2,label='$\Phi_{\delta u/\delta x}$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Phi(k)$', fontdict = font)
plt.xticks(fontsize = 16)
plt.ylim(0.1,250)
plt.yticks(fontsize = 16)
plt.show()
print('error:',np.linalg.norm(F[0:Nwave/2]-F_fd[0:Nwave/2],np.inf))
print('error per wavenumber')
plt.loglog(k[0:Nx/2+1],np.abs(F-F_fd),'r-',lw=2)
plt.xlabel('$k$', fontdict = font)
plt.ylabel('$\Vert\Phi(k)-\Phi_{\delta u/\delta x}(k)\Vert$', fontdict = font)
plt.xticks(fontsize = 16)
plt.ylim(1e-5,250)
plt.yticks(fontsize = 16)
plt.show()
Explanation: The error is large which is compounded by the fact that <FONT FACE="courier" style="color:blue">np.linalg.norm</FONT> does not normalize the norm by the number of points. Nonetheless the error is far greater than for the derivative using the Fourier space. To shed some light on the cause for errors, the following code computes the spectrum of the first derivative. The second graph shows the difference in spectral energy between the exact solution and the derivative approximated with finite difference per wavenumber.
<p class='alert alert-info'>
However the best way to visualize the problem with spectra alone is to lower the resolution at the top cell and rerun the whole notebook. Finish the notebook as is and then rerun with $N_x=64$.
</p>
End of explanation
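As a side note, dividing the norm by $\sqrt{N_x}$ gives a per-point (RMS-like) measure of the same error; a quick sketch:
# Normalizing by the number of points gives a per-point error measure
print('normalized error:', np.linalg.norm(du_fd-du)/np.sqrt(Nx))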
L = np.pi
N = 32
dx = L/N
omega = np.linspace(0,L,N)
omegap_exact = omega
omegap_modified = np.sin(omega)
plt.plot(omega,omegap_exact,'k-',lw=2,label='exact')
plt.plot(omega,omegap_modified,'r-',lw=2,label='1st order upwind')
plt.xlim(0,L)
plt.ylim(0,L)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\omega\Delta x$', fontdict = font)
plt.ylabel('$\omega_{mod}\Delta x$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
Explanation: <p class='alert alert-success'>
Which scales are the most affected by the finite difference scheme? What effect do you observe?
</p>
<h1>Modified Wavenumber</h1>
Starting from,
$$
f(x) = \sum_{k=-N/2+1}^{N/2}\hat{f}_k\exp\left(\hat{\jmath}\frac{2\pi kx}{L_x}\right)
$$
and introducing the wavenumber
$$
\omega = \frac{2\pi k}{L_x}
$$
allows to reduce the Fourier expansion to
$$
f(x) = \sum_{\omega=-\pi}^{\pi}\hat{f}_\omega e^{\hat{\jmath}\omega x}
$$
The derivative of $f$ becomes
$$
f'(x) = \sum_{\omega=-\pi}^{\pi}\hat{f'}_\omega e^{\hat{\jmath}\omega x}
$$
where $\hat{f'}_\omega$ is only a notation, not the derivative of the Fourier coefficient:
$$
\hat{f'}_\omega=\hat{\jmath}\omega\hat{f}_\omega
$$
Considering the symmetry of the FT, we restrict the study to $\omega\Delta x\in[0,\pi]$.
Now we want to express the first order finite difference scheme in terms of Fourier coefficients and wavenumbers. The scaled coordinates expression of the scheme is
$$
\frac{\delta f}{\delta x}=\frac{f(x)-f(x-\Delta x)}{\Delta x}=\sum_{\omega=-\pi}^{\pi}\hat{f}_\omega\frac{e^{\hat{\jmath}\omega x}-e^{\hat{\jmath}\omega (x-\Delta x)}}{\Delta x}=\sum_{\omega=-\pi}^{\pi}\frac{1-e^{-\hat{\jmath}\omega\Delta x}}{\Delta x}\hat{f}_\omega e^{\hat{\jmath}\omega x}
$$
We now define a modified wavenumber for the first order upwind scheme:
$$
\hat{\jmath}\omega'=\frac{1-e^{-\hat{\jmath}\omega\Delta x}}{\Delta x}=\frac{1-\cos(-\omega\Delta x)-\hat{\jmath}\sin(-\omega\Delta x)}{\Delta x}
$$
which reduces to
$$
\hat{\jmath}\omega_{mod}\Delta x= \hat{\jmath}\sin(\omega\Delta x)+\left(1-\cos(\omega\Delta x)\right)
$$
The modified wavenumber is no longer purely imaginary, and even the imaginary part is far from the exact wavenumber, as shown in the following plot.
End of explanation
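A small helper also makes it easy to evaluate the modified wavenumber of any finite difference stencil numerically; the sketch below takes the stencil as a dictionary of offsets and coefficients for a scheme of the form (1/dx)*sum_m c_m f_{i+m}, and returns the complex quantity j*omega_mod*dx.
# Sketch: modified wavenumber of an arbitrary finite difference stencil
def modified_wavenumber(stencil, omega_dx):
    # stencil = {offset m: coefficient c_m}; returns j*omega_mod*dx
    return sum(c*np.exp(1j*m*omega_dx) for m, c in stencil.items())

wdx = np.linspace(0, np.pi, 64)
upwind_mod = modified_wavenumber({0: 1.0, -1: -1.0}, wdx)    # (f_i - f_{i-1})/dx
central_mod = modified_wavenumber({1: 0.5, -1: -0.5}, wdx)   # (f_{i+1} - f_{i-1})/(2 dx)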
u_hat = np.fft.fft(u)
k = np.hstack((np.arange(0,Nx/2+1),np.arange(-Nx/2+1,0)))
dx = Lx/Nx
ikm = 1j*np.sin(2.*np.pi/Nx*k)/dx+(1-np.cos(2.*np.pi/Nx*k))/dx
v_hat = ikm*u_hat
dum = np.real(np.fft.ifft(v_hat))
plt.plot(x,dum,'r-',lw=2,label='$FT^{-1}[\hat{\jmath}\omega_{mod}FT[u]]$')
plt.plot(x,du_fd,'b--',lw=2,label='$\delta u/\delta x$')
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.ylabel('$du/dx$',fontsize = 18)
plt.xlabel('$x$',fontsize = 18)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim(0,Lx)
plt.show()
print('error:',np.linalg.norm(du_fd-dum,np.inf))
Explanation: First, let's verify that the derivation of the modified wavenumber is correct.
End of explanation
!ipython nbconvert --to html ME249-Lecture-3-YOURNAME.ipynb
Explanation: <p class='alert alert-success'>
- Explain why $\omega\Delta x$ can be defined from $0$ to $\pi$.
</p>
<p class='alert alert-success'>
- Write a code to illustrate the effects of the imaginary part and real part on the derivative on the following function
$$
f(x) = cos(nx)
$$
defined for $x\in[0,2\pi]$, discretized with $N$ points. Study a few $n$, ranging from large scales to small scales. Note the two effects we seek to identify are phase change and amplitude change.
</p>
<p class='alert alert-success'>
- Derive the modified wavenumber for the second order central finite difference scheme
$$
\frac{\delta f}{\delta x}=\frac{f_{i+1}-f_{i-1}}{2\Delta x}
$$
</p>
<p class='alert alert-success'>
- Create a second order upwind scheme and derive the modified wavenumber. Compare the performance of the first order and the second order schemes. For the second order upwind scheme, find $a$, $b$ and $c$ such that
$$
\frac{\delta f}{\delta x}=\frac{af_{i-2}+bf_{i-1}+cf_i}{\Delta x}
$$
</p>
End of explanation |
15,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Load the data
As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf
Step2: Create a dataset
In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions
Step3: We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractable using the datashader operation. This operation will aggregate the data on a 2D grid, apply shading to assign pixel colors to each bin in this grid, and build an RGB Element (just a fixed-sized image) we can safely display
Step4: If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan
Step5: Adding a tile source
Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it
Step6: Aggregating with a variable
So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. Here use the ds.mean aggregator to compute the average cost of a trip at a dropoff location
Step7: Grouping by a variable
Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day
Step8: Additional features
The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily verlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still only ever sending a fixed-size array to the browser to avoid issues with large data. | Python Code:
import pandas as pd
import holoviews as hv
import dask.dataframe as dd
import datashader as ds
import geoviews as gv
from holoviews.operation.datashader import datashade, aggregate
hv.extension('bokeh')
Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a>
<div style="float:right;"><h2>07. Working with large datasets</h2></div>
HoloViews supports even high-dimensional datasets easily, and the standard mechanisms discussed already work well as long as you select a small enough subset of the data to display at any one time. However, some datasets are just inherently large, even for a single frame of data, and cannot safely be transferred for display in any standard web browser. Luckily, HoloViews makes it simple for you to use the separate datashader together with any of the plotting extension libraries, including Bokeh and Matplotlib. The datashader library is designed to complement standard plotting libraries by providing faithful visualizations for very large datasets, focusing on revealing the overall distribution, not just individual data points.
Datashader uses computations accellerated using Numba, making it fast to work with datasets of millions or billions of datapoints stored in dask dataframes. Dask dataframes provide an API that is functionally equivalent to pandas, but allows working with data out of core while scaling out to many processors and even clusters. Here we will use Dask to load a large CSV file of taxi coordinates.
<div>
<img align="left" src="./assets/numba.png" width='140px'/>
<img align="left" src="./assets/dask.png" width='85px'/>
<img align="left" src="./assets/datashader.png" width='158px'/>
</div>
How does datashader work?
<img src="./assets/datashader_pipeline.png" width="80%"/>
Tools like Bokeh map Data (left) directly into an HTML/JavaScript Plot (right)
datashader instead renders Data into a plot-sized Aggregate array, from which an Image can be constructed then embedded into a Bokeh Plot
Only the fixed-sized Image needs to be sent to the browser, allowing millions or billions of datapoints to be used
Every step automatically adjusts to the data, but can be customized
When not to use datashader
Plotting less than 1e5 or 1e6 data points
When every datapoint matters; standard Bokeh will render all of them
For full interactivity (hover tools) with every datapoint
When to use datashader
Actual big data; when Bokeh/Matplotlib have trouble
When the distribution matters more than individual points
When you find yourself sampling or binning to better understand the distribution
End of explanation
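For reference, the Data-to-Aggregate-to-Image pipeline described above can also be driven directly with the datashader API; a minimal sketch (using the taxi dataframe ddf loaded in the next cell, with illustrative plot sizes and colors) might look like this:
# Sketch of the underlying pipeline that datashade() wraps (illustrative only)
import datashader as ds
import datashader.transfer_functions as tf
cvs = ds.Canvas(plot_width=600, plot_height=500)                # plot-sized grid
agg = cvs.points(ddf, 'dropoff_x', 'dropoff_y', ds.count())     # Aggregate array
img = tf.shade(agg, cmap=["lightblue", "darkblue"])             # fixed-size Image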
ddf = dd.read_csv('../data/nyc_taxi.csv', parse_dates=['tpep_pickup_datetime'])
ddf['hour'] = ddf.tpep_pickup_datetime.dt.hour
# If your machine is low on RAM (<8GB) don't persist (though everything will be much slower)
ddf = ddf.persist()
print('%s Rows' % len(ddf))
print('Columns:', list(ddf.columns))
Explanation: Load the data
As a first step we will load a large dataset using dask. If you have followed the setup instructions you will have downloaded a large CSV containing 12 million taxi trips. Let's load this data using dask to create a dataframe ddf:
End of explanation
points = hv.Points(ddf, kdims=['dropoff_x', 'dropoff_y'])
Explanation: Create a dataset
In previous sections we have already seen how to declare a set of Points from a pandas DataFrame. Here we do the same for a Dask dataframe passed in with the desired key dimensions:
End of explanation
%opts RGB [width=600 height=500 bgcolor="black"]
datashade(points)
Explanation: We could now simply type points, and Bokeh will attempt to display this data as a standard Bokeh plot. Before doing that, however, remember that we have 12 million rows of data, and no current plotting program will handle this well! Instead of letting Bokeh see this data, let's convert it to something far more tractable using the datashader operation. This operation will aggregate the data on a 2D grid, apply shading to assign pixel colors to each bin in this grid, and build an RGB Element (just a fixed-sized image) we can safely display:
End of explanation
datashade.streams
# Exercise: Plot the taxi pickup locations ('pickup_x' and 'pickup_y' columns)
# Warning: Don't try to display hv.Points() directly; it's too big! Use datashade() for any display
# Optional: Change the cmap on the datashade operation to inferno
from datashader.colors import inferno
Explanation: If you zoom in you will note that the plot rerenders depending on the zoom level, which allows the full dataset to be explored interactively even though only an image of it is ever sent to the browser. The way this works is that datashade is a dynamic operation that also declares some linked streams. These linked streams are automatically instantiated and dynamically supply the plot size, x_range, and y_range from the Bokeh plot to the operation based on your current viewport as you zoom or pan:
End of explanation
%opts RGB [xaxis=None yaxis=None]
import geoviews as gv
from bokeh.models import WMTSTileSource
url = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{Z}/{Y}/{X}.jpg'
wmts = WMTSTileSource(url=url)
gv.WMTS(wmts) * datashade(points)
# Exercise: Overlay the taxi pickup data on top of the Wikipedia tile source
wiki_url = 'https://maps.wikimedia.org/osm-intl/{Z}/{X}/{Y}@2x.png'
Explanation: Adding a tile source
Using the GeoViews (geographic) extension for HoloViews, we can display a map in the background. Just declare a Bokeh WMTSTileSource and pass it to the gv.WMTS Element, then we can overlay it:
End of explanation
selected = points.select(total_amount=(None, 1000))
selected.data = selected.data.persist()
gv.WMTS(wmts) * datashade(selected, aggregator=ds.mean('total_amount'))
# Exercise: Use the ds.min or ds.max aggregator to visualize ``tip_amount`` by dropoff location
# Optional: Eliminate outliers by using select
Explanation: Aggregating with a variable
So far we have simply been counting taxi dropoffs, but our dataset is much richer than that. We have information about a number of variables including the total cost of a taxi ride, the total_amount. Datashader provides a number of aggregator functions, which you can supply to the datashade operation. Here use the ds.mean aggregator to compute the average cost of a trip at a dropoff location:
End of explanation
%opts Image [width=600 height=500 logz=True xaxis=None yaxis=None]
taxi_ds = hv.Dataset(ddf)
grouped = taxi_ds.to(hv.Points, ['dropoff_x', 'dropoff_y'], groupby=['hour'], dynamic=True)
aggregate(grouped).redim.values(hour=range(24))
# Exercise: Facet the trips in the morning hours as an NdLayout using aggregate(grouped.layout())
# Hint: You can reuse the existing grouped variable or select a subset before using the .to method
Explanation: Grouping by a variable
Because datashading happens only just before visualization, you can use any of the techniques shown in previous sections to select, filter, or group your data before visualizing it, such as grouping it by the hour of day:
End of explanation
%%opts QuadMesh [width=800 height=400 tools=['hover']] (alpha=0 hover_line_alpha=1 hover_fill_alpha=0)
hover_info = aggregate(points, width=40, height=20, streams=[hv.streams.RangeXY]).map(hv.QuadMesh, hv.Image)
gv.WMTS(wmts) * datashade(points) * hover_info
Explanation: Additional features
The actual points are never given directly to Bokeh, and so the normal Bokeh hover (and other) tools will not normally be useful with Datashader output. However, we can easily overlay an invisible QuadMesh to reveal information on hover, providing information about values in a local area while still only ever sending a fixed-size array to the browser to avoid issues with large data.
End of explanation |
15,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 12
Step1: That's everything we need for a working function! Let's walk through it.
Step2: def keyword
Step3: Other notes on functions
You can define functions (as we did just before) almost anywhere in your code. As we'll see when we get to functional programming, you can literally define functions in the middle of a line of code. Still, good coding practices behooves you to generally group your function definitions together, e.g. at the top of your module.
Invoking or activating a function is referred to as calling the function.
Functions can be part of modules. You've already seen some of these in action
Step4: Now the array() method can be called directly without prepending the package name numpy in front. USE THIS CAUTIOUSLY
Step5: Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code itself gives you hints as to what it does.
When you call a function, you'll need to provide the same number of arguments in the function call as appear in the function header, otherwise Python will yell at you.
Step6: To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic "positional" arguments (as they are so called in the previous error message) into optional arguments.
Default arguments
"Positional" arguments--the only kind we've seen so far--are required. If the function header specifies a positional argument, then every single call to that functions needs to have that argument specified.
There are cases, however, where it can be helpful to have optional, or default, arguments. In this case, when the function is called, the programmer can decide whether or not they want to override the default values.
You can specify default arguments in the function header
Step7: If you look through the NumPy online documentation, you'll find most of its functions have entire books' worth of default arguments.
The numpy.array function we've been using has quite a few; the only positional (required) argument for that function is some kind of list/array structure to wrap a NumPy array around. Everything else it tries to figure out on its own, unless the programmer explicitly specifies otherwise.
Step8: Notice the decimal points that follow the values in the second array! This is NumPy's way of showing that these numbers are floats, not integers!
Keyword Arguments
Keyword arguments are something of a superset of positional and default arguments.
By the names, positional seems to imply a relationship with position (specifically, position in the list of arguments), and default seems obvious enough
Step9: In this example, we switched the ordering of the arguments between the two function calls; consequently, the ordering of the arguments inside the function were also flipped. Hence, positional
Step10: As you can see, we used the names of the arguments from the function header itself, setting them equal to the variable we wanted to use for that argument.
Consequently, order doesn't matter--Python can see that, in both function calls, we're setting name1 = pet1 and name2 = pet2.
Even though keyword arguments somewhat obviate the need for strictly positional arguments, keyword arguments are extremely useful when it comes to default arguments.
If you take a look at any NumPy API--even the documentation for numpy.array--there are LOTS of default arguments. Trying to remember their ordering is a pointless task. What's much easier is to simply remember the name of the argument--the keyword--and use that to override any default argument you want to change.
Ordering of the keyword arguments doesn't matter; that's why we can specify some of the default parameters by keyword, leaving others at their defaults, and Python doesn't complain.
Here's an important distinction, though
Step11: Arbitrary Argument Lists
There are instances where you'll want to pass in an arbitrary number of arguments to a function, a number which isn't known until the function is called and could change from call to call!
On one hand, you could consider just passing in a single list, thereby obviating the need. That's more or less what actually happens here, but the syntax is a tiny bit different.
Here's an example
Step12: Inside the function, it's basically treated as a list
Step13: This is pretty basic
Step14: You can even return multiple values simultaneously from a function. They're just treated as tuples!
Step15: This two-way communication that functions enable--arguments as input, return values as output--is an elegant and powerful way of allowing you to design modular and human-understandable code.
Part 4
Step16: Once the function finishes running, what is the value of x?
Step17: It prints 10. Can anyone explain why?
Let's take another, slightly different, example.
Step18: Once the function finishes running, what is the value of x? | Python Code:
def our_function():
pass
Explanation: Lecture 12: Functions
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
Overview and Objectives
In this lecture, we'll introduce the concept of functions, critical abstractions in nearly every modern programming language. Functions are important for abstracting and categorizing large codebases into smaller, logical, and human-digestable components. By the end of this lecture, you should be able to:
Define a function that performs a specific task
Set function arguments and return values
Differentiate positional arguments from keyword arguments
Construct functions that take any number of arguments, in positional or key-value format
Part 1: Defining Functions
A function in Python is not very different from a function as you've probably learned since algebra.
"Let $f$ be a function of $x$"...sound familiar? We're basically doing the same thing here.
A function ($f$) will [usually] take something as input ($x$), perform some kind of operation on it, and then [usually] return a result ($y$). Which is why we usually see $f(x) = y$. A function, then, is composed of three main components:
1: The function itself. A [good] function will have one very specific task it performs. This task is usually reflected in its name. Take the examples of print, or sqrt, or exp, or log; all these names are very clear about what the function does.
2: Arguments (if any). Arguments (or parameters) are the input to the function. It's possible a function may not take any arguments at all, but often at least one is required. For example, print has 1 argument: a string.
3: Return values (if any). Return values are the output of the function. It's possible a function may not return anything; technically, print does not return anything. But common math functions like sqrt or log have clear return values: the output of that math operation.
Philosophy
A core tenet in writing functions is that functions should do one thing, and do it well (with apologies to the Unix Philosophy).
Writing good functions makes code much easier to troubleshoot and debug, as the code is already logically separated into components that perform very specific tasks. Thus, if your application is breaking, you usually have a good idea where to start looking.
WARNING: It's very easy to get caught up writing "god functions": one or two massive functions that essentially do everything you need your program to do. But if something breaks, this design is very difficult to debug.
Functions vs Methods
You've probably heard the term "method" before, in this class. Quite often, these two terms are used interchangeably, and for our purposes they are pretty much the same.
BUT. These terms ultimately identify different constructs, so it's important to keep that in mind. Specifically:
Methods are functions inside classes (not really covered in this course).
Functions are not inside classes. In some sense, they're "free" (though they may be found inside specific modules; however, since a module != a class, they're still called functions).
Otherwise, functions and methods work identically.
So how do we write functions? At this point in the course, you've probably already seen how this works, but we'll go through it step by step regardless.
First, we define the function header. This is the portion of the function that defines the name of the function, the arguments, and uses the Python keyword def to make everything official:
End of explanation
def our_function():
pass
Explanation: That's everything we need for a working function! Let's walk through it.
End of explanation
# Call the function!
our_function()
# Nothing happens...no print statement, no computations, nothing. But there's no error either...so, yay?
Explanation: def keyword: required before writing any function, to tell Python "hey! this is a function!"
Function name: one word (can "fake" spaces with underscores), which is the name of the function and how we'll refer to it later
Arguments: a comma-separated list of arguments the function takes to perform its task. If no arguments are needed (as above), then just open-paren-close-paren.
Colon: the colon indicates the end of the function header and the start of the actual function's code.
pass: since Python is sensitive to whitespace, we can't leave a function body blank; luckily, there's the pass keyword that does pretty much what it sounds like--no operation at all, just a placeholder.
Admittedly, our function doesn't really do anything interesting. It takes no parameters, and the function body consists exclusively of a placeholder keyword that also does nothing. Still, it's a perfectly valid function!
End of explanation
from numpy import array
Explanation: Other notes on functions
You can define functions (as we did just before) almost anywhere in your code. As we'll see when we get to functional programming, you can literally define functions in the middle of a line of code. Still, good coding practice behooves you to generally group your function definitions together, e.g. at the top of your module.
Invoking or activating a function is referred to as calling the function.
Functions can be part of modules. You've already seen some of these in action: the numpy.array() functionality is indeed a function.
Though not recommended, it's possible to import only select functions from a module, so you no longer have to specify the module name in front of the function name when calling the function. This uses the from keyword during import:
End of explanation
def one_arg(arg1):
pass
def two_args(arg1, arg2):
pass
def three_args(arg1, arg2, arg3):
pass
# And so on...
Explanation: Now the array() function can be called directly without prepending the package name numpy in front. USE THIS CAUTIOUSLY: if you accidentally name a variable array later in your code, you will get some very strange errors!
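Here is a small sketch of that pitfall (the reassignment is invented for illustration):
from numpy import array
print(array([1, 2, 3]))   # works: array is NumPy's function
array = [4, 5, 6]         # the name 'array' now points to a plain list instead
# array([1, 2, 3])        # uncommenting this raises TypeError: 'list' object is not callable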
Part 2: Function Arguments
Arguments (or parameters), as stated before, are the function's input; the "$x$" to our "$f$", as it were.
You can specify as many arguments as you want, separating them by commas:
End of explanation
one_arg("some arg")
two_args("some arg")
two_args("some arg", "another arg")
Explanation: Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code itself gives you hints as to what it does.
When you call a function, you'll need to provide the same number of arguments in the function call as appear in the function header, otherwise Python will yell at you.
End of explanation
def func_with_default_arg(positional, default = 10):
print("'" + positional + "' with default arg '" + str(default) + "'")
func_with_default_arg("Input string")
func_with_default_arg("Input string", default = 999)
Explanation: To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic "positional" arguments (as they are so called in the previous error message) into optional arguments.
Default arguments
"Positional" arguments--the only kind we've seen so far--are required. If the function header specifies a positional argument, then every single call to that functions needs to have that argument specified.
There are cases, however, where it can be helpful to have optional, or default, arguments. In this case, when the function is called, the programmer can decide whether or not they want to override the default values.
You can specify default arguments in the function header:
End of explanation
import numpy as np
x = np.array([1, 2, 3])
y = np.array([1, 2, 3], dtype = float) # Specifying the data type of the array, using "dtype"
print(x)
print(y)
Explanation: If you look through the NumPy online documentation, you'll find most of its functions have entire books' worth of default arguments.
The numpy.array function we've been using has quite a few; the only positional (required) argument for that function is some kind of list/array structure to wrap a NumPy array around. Everything else it tries to figure out on its own, unless the programmer explicitly specifies otherwise.
End of explanation
def pet_names(name1, name2):
print("Pet 1: " + name1)
print("Pet 2: " + name2)
pet1 = "King"
pet2 = "Reginald"
pet_names(pet1, pet2)
pet_names(pet2, pet1)
Explanation: Notice the decimal points that follow the values in the second array! This is NumPy's way of showing that these numbers are floats, not integers!
Keyword Arguments
Keyword arguments are something of a superset of positional and default arguments.
By the names, positional seems to imply a relationship with position (specifically, position in the list of arguments), and default seems obvious enough: it takes on a default value unless otherwise specified.
Keyword arguments can overlap with both, in that they can be either required or default, but provide a nice utility by which you can ensure the variable you're passing into a function is taking on the exact value you want it to.
Let's take the following function.
End of explanation
pet1 = "Rocco"
pet2 = "Lucy"
pet_names(name1 = pet1, name2 = pet2)
pet_names(name2 = pet2, name1 = pet1)
Explanation: In this example, we switched the ordering of the arguments between the two function calls; consequently, the ordering of the arguments inside the function was also flipped. Hence, positional: position matters.
In contrast, Python also has keyword arguments, where order no longer matters as long as you specify the keyword.
We can use the same function as before, pet_names, only this time we'll use the names of the arguments themselves (aka, keywords):
End of explanation
# Here's our function with a default argument.
def pos_def(x, y = 10):
return x + y
# Using keywords in the same order they're defined is totally fine.
z = pos_def(x = 10, y = 20)
print(z)
# Mixing their ordering is ok, as long as I'm specifying the keywords.
z = pos_def(y = 20, x = 10)
print(z)
# Only specifying the default argument is a no-no.
z = pos_def(y = 20)
print(z)
Explanation: As you can see, we used the names of the arguments from the function header itself, setting them equal to the variable we wanted to use for that argument.
Consequently, order doesn't matter--Python can see that, in both function calls, we're setting name1 = pet1 and name2 = pet2.
Even though keyword arguments somewhat obviate the need for strictly positional arguments, keyword arguments are extremely useful when it comes to default arguments.
If you take a look at any NumPy API--even the documentation for numpy.array--there are LOTS of default arguments. Trying to remember their ordering is a pointless task. What's much easier is to simply remember the name of the argument--the keyword--and use that to override any default argument you want to change.
Ordering of the keyword arguments doesn't matter; that's why we can specify some of the default parameters by keyword, leaving others at their defaults, and Python doesn't complain.
Here's an important distinction, though:
Default (optional) arguments are always keyword arguments, but...
Positional (required) arguments MUST come before default arguments!
In essence, when using the argument keywords, you can't mix-and-match the ordering of positional and default arguments.
(you can't really mix-and-match the ordering of positional and default arguments anyway, so hopefully this isn't a rude awakening)
Here's an example of this behavior in action:
End of explanation
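One companion sketch before moving on (the function names are made up, not from the lecture): the same ordering rule also applies when defining a function--a default argument cannot come before a required one in the header.
# def bad_order(y = 10, x):    # uncommenting this def raises a SyntaxError
#     return x + y
def good_order(x, y = 10):     # required argument first, then the default--this is fine
    return x + y
print(good_order(5))           # 15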
def make_pizza(*toppings):
print("Making a pizza with the following toppings:")
for topping in toppings:
print(" - " + topping)
make_pizza("pepperoni")
make_pizza("pepperoni", "banana peppers", "green peppers", "mushrooms")
Explanation: Arbitrary Argument Lists
There are instances where you'll want to pass in an arbitrary number of arguments to a function, a number which isn't known until the function is called and could change from call to call!
On one hand, you could consider just passing in a single list, thereby obviating the need. That's more or less what actually happens here, but the syntax is a tiny bit different.
Here's an example: a function which lists out pizza toppings. Note the format of the input argument(s):
End of explanation
def identity_function(in_arg):
return in_arg
x = "this is the function input"
return_value = identity_function(x)
print(return_value)
Explanation: Inside the function, it's basically treated like a list: strictly speaking, Python packs the extra arguments into a tuple, but you can loop over it just the same.
So why not just make the input argument a single variable which is a list?
Convenience.
In some sense, it's more intuitive to the programmer calling the function to just list out a bunch of things, rather than putting them all in a list structure first.
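A small check makes this concrete (the function names below are invented for the sketch): the packed-up arguments arrive as a tuple, and the plain-list alternative forces the caller to build the list themselves.
def show_type(*toppings):
    print(type(toppings))      # <class 'tuple'>
show_type("pepperoni", "mushrooms")
def make_pizza_from_list(toppings):
    for topping in toppings:
        print(" - " + topping)
make_pizza_from_list(["pepperoni", "mushrooms"])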
Part 3: Return Values
Just as functions [can] take input, they also [can] return output for the programmer to decide what to do with.
Almost any function you will ever write will most likely have a return value of some kind. If not, your function may not be "well-behaved", aka sticking to the general guideline of doing one thing very well.
There are certainly some cases where functions won't return anything--functions that just print things, functions that run forever (yep, they exist!), functions designed specifically to test other functions--but these are highly specialized cases we are not likely to encounter in this course. Keep this in mind as a "rule of thumb."
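One related detail (a tiny sketch; the names are invented): a function with no return statement still hands back the special value None.
def just_prints(msg):
    print(msg)
result = just_prints("hello")
print(result)                  # None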
To return a value from a function, just use the return keyword:
End of explanation
def compute_square(number):
square = number ** 2
return square
start = 3
end = compute_square(start)
print("Square of " + str(start) + " is " + str(end))
Explanation: This is pretty basic: the function returns back to the programmer as output whatever was passed into the function as input. Hence, "identity function."
Anything you can pass in as function parameters, you can return as function output, including lists:
End of explanation
import numpy.random as r
def square_and_rand(number):
square = compute_square(number)
rand_num = r.randint(0, 100)
return rand_num, square
retvals = square_and_rand(3)
print(retvals)
Explanation: You can even return multiple values simultaneously from a function. They're just treated as tuples!
End of explanation
def magic_function(x):
x = 20
print("Inside function: x = " + str(x))
x = 10
print("Before calling 'magic_function': x = " + str(x))
# Now, let's call magic_function(). What is x = ?
magic_function(x)
Explanation: This two-way communication that functions enable--arguments as input, return values as output--is an elegant and powerful way of allowing you to design modular and human-understandable code.
Part 4: A Note on Modifying Arguments
This is arguably one of the trickiest parts of programming, so please ask questions if you're having trouble.
Let's start with an example to illustrate what this means. Take the following code:
End of explanation
print(x)
Explanation: Once the function finishes running, what is the value of x?
End of explanation
def magic_function2(x):
x[0] = 20
print("Inside function: x = " + str(x))
x = [10, 10]
print("Before function: x = " + str(x))
# Now, let's call magic_function2(x). What is x = ?
magic_function2(x)
Explanation: It prints 10. Can anyone explain why?
Let's take another, slightly different, example.
End of explanation
print(x)
Explanation: Once the function finishes running, what is the value of x?
End of explanation |
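To spell out what these last two examples are probing (a sketch; the names are invented): reassigning a parameter only rebinds the function's local name, while mutating a list through the parameter changes the very object the caller passed in.
def rebind(value):
    value = 20                 # rebinds the local name only; the caller never sees this
a = 10
rebind(a)
print(a)                       # still 10
def mutate(items):
    items[0] = 20              # modifies the shared list object in place
b = [10, 10]
mutate(b)
print(b)                       # [20, 10]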
15,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Prepare Problem
a) Load libraries
Step1: b) Load dataset
Download customer account data from Wiley's website, RetailMart.xlsx
Step2: The 'Pregnant' column can only take on one of two (in this case) possibilities. Here 1 = pregnant, and 0 = not pregnant
2. Summarize Data
a) Descriptive statistics
Step3: We can see no features with significant correlation coefficients (i.e., $r$ values > 0.7)
3. Prepare Data
a) Data Transforms
We need to 'dummify' (i.e., separate out) the categorical variables
Step4: 4. Evaluate Algorithms
a) Split-out validation dataset
Step5: b) Spot Check Algorithms
Step6: c) Select The Best Model
Step7: 5. Make predictions on validation dataset
Linear Discriminant Analysis is just about the most accurate model. Now test the accuracy of the model on the validation dataset. | Python Code:
import pandas as pd
import numpy as np
from pandas.tools.plotting import scatter_matrix
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn import metrics
Explanation: 1. Prepare Problem
a) Load libraries
End of explanation
# find path to your RetailMart.xlsx
dataset = pd.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch06/RetailMart.xlsx','rb'), sheetname=0)
dataset = dataset.drop('Unnamed: 17', 1) # drop empty col
dataset.rename(columns={'PREGNANT':'Pregnant'}, inplace=True)
dataset.rename(columns={'Home/Apt/ PO Box':'Residency'}, inplace=True) # add simpler col name
dataset.columns = [x.strip().replace(' ', '_') for x in dataset.columns] # python does not like spaces in var names
Explanation: b) Load dataset
Download customer account data from Wiley's website, RetailMart.xlsx
End of explanation
# shape
print(dataset.shape)
# types
print(dataset.dtypes)
# head
dataset.head()
# feature distribution
print(dataset.groupby('Implied_Gender').size())
# target distribution
print(dataset.groupby('Pregnant').size())
# correlation
r = dataset.corr(method='pearson')
id_matrix = np.identity(r.shape[0]) # create identity matrix
r = r-id_matrix # remove same-feature correlations
np.where( r > 0.7 )
Explanation: The 'Pregnant' column can only take on one of two (in this case) possibilities. Here 1 = pregnant, and 0 = not pregnant
2. Summarize Data
a) Descriptive statistics
End of explanation
# dummify gender variable
dummy_gender = pd.get_dummies(dataset['Implied_Gender'], prefix='Gender')
print(dummy_gender.head())
# dummify residency variable
dummy_resident = pd.get_dummies(dataset['Residency'], prefix='Resident')
print(dummy_resident.head())
# Drop the original categorical variables
dataset = dataset.drop('Implied_Gender', axis=1)
dataset = dataset.drop('Residency', axis=1)
# Add dummy variables
dataset = pd.concat([dummy_gender.loc[:, 'Gender_M':], dummy_resident.loc[:, 'Resident_H':], dataset], axis=1)  # .loc replaces the removed .ix indexer
dataset.head()
# Make clean dataframe for regression model
array = dataset.values
n_features = len(array[0])
X = array[:,0:n_features-1] # features
y = array[:,n_features-1] # target
Explanation: We can see no features with significant correlation coefficients (i.e., $r$ values > 0.7)
3. Prepare Data
a) Data Transforms
We need to 'dummify' (i.e., separate out) the categorical variables: implied gender and residency
End of explanation
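A side note on the manual 'Gender_M':/'Resident_H': slicing above, which drops one dummy column per variable to avoid perfectly collinear features: pandas can do the same thing directly with drop_first=True. A standalone sketch (the toy frame is invented):
toy = pd.DataFrame({'Implied_Gender': ['M', 'F', 'U', 'M']})
print(pd.get_dummies(toy['Implied_Gender'], prefix='Gender', drop_first=True).head())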
# Split-out validation dataset
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = train_test_split(X, y,
test_size=validation_size, random_state=seed)
Explanation: 4. Evaluate Algorithms
a) Split-out validation dataset
End of explanation
# Spot-Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
# evaluate each model in turn
results = []
names = []
for name, model in models:
kfold = KFold(n_splits=10, random_state=seed)
cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
Explanation: b) Spot Check Algorithms
End of explanation
# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
Explanation: c) Select The Best Model
End of explanation
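Alongside the boxplot, the comparison can also be read off programmatically--a small sketch reusing the names and results lists built above (the mean_scores variable is invented):
mean_scores = sorted(zip(names, [r.mean() for r in results]), key=lambda t: t[1], reverse=True)
print(mean_scores)   # models ranked by mean cross-validation accuracy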
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, Y_train)
predictions = lda.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
# predict probability of survival
y_pred_prob = lda.predict_proba(X_validation)[:, 1]
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(Y_validation, y_pred_prob)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([-0.05, 1.0])
plt.ylim([0.0, 1.05])
plt.gca().set_aspect('equal', adjustable='box')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.show()
# calculate AUC
print(metrics.roc_auc_score(Y_validation, y_pred_prob))
Explanation: 5. Make predictions on validation dataset
Linear Discriminant Analysis is just about the most accurate model. Now test the accuracy of the model on the validation dataset.
End of explanation |
15,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Same tweet belongs to multiple datasets
Step1: Merge URL types
Step2: Add location states
df_merged_meta_cats.u_location.value_counts().to_csv("USER_LOCATIONS.txt", sep="\t", encoding='utf-8')
! head USER_LOCATIONS.txt
! python process_user_locations.py ## RUN using python3 from command line | Python Code:
df_merged_meta.t_id.value_counts().head()
df_merged_meta[df_merged_meta.t_id == 700042121877835776][["topic_name"]]
df_merged_meta.t_id.value_counts()[df_merged_meta.t_id.value_counts() > 1]
df_merged_meta[df_merged_meta.t_id == 792354716521009152].T
df_merged_meta["is_controversial"] = df_merged_meta.topic_name.isin(CONTROVERSIAL_TOPICS)
df_merged_meta.is_controversial.value_counts()
Explanation: Same tweet belongs to multiple datasets
End of explanation
df_merged_meta.columns
df_mapped_cats = pd.read_csv("TID_URL_CATS.txt", sep="\t").assign(
CATS=lambda x: x.CATS.apply(lambda k: k.split("|"))
)
df_mapped_cats.head()
URL_DICT = dict(zip(df_mapped_cats.URL.values, df_mapped_cats.CATS.values))
URL_DICT["http://TinyURL.com/NewYearCure"]
len(URL_DICT)
df_mapped_cats.TID.value_counts().head()
df_mapped_cats[df_mapped_cats.TID == 700152617033289728]
df_tweet_cat_counts = df_mapped_cats.groupby("TID")["CATS"].apply(lambda x: sum(x, []))
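# A note on the flattening above (sketch; the _alt name is invented): sum(x, []) works but is
# quadratic in the number of lists -- itertools.chain.from_iterable does the same job in linear time.
from itertools import chain
df_tweet_cat_counts_alt = df_mapped_cats.groupby("TID")["CATS"].apply(lambda x: list(chain.from_iterable(x)))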
df_tweet_cat_counts.head()
df_tweet_cat_counts.reset_index().dtypes
df_merged_meta.shape
df_merged_meta.t_id.value_counts().head()
df_merged_meta_cats = df_merged_meta.merge(
df_tweet_cat_counts.reset_index(), how="left", left_on="t_id", right_on="TID")
df_merged_meta_cats.columns
Explanation: Merge URL types
End of explanation
df_places = pd.read_csv("PARSED_STATES.final.txt", sep="\t")
df_places = df_places.rename(columns={
"location": "u_location", "parse_manual": "u_state"
})[["u_location", "u_state"]]
df_places.head()
df_merged_meta_cats = df_merged_meta_cats.merge(df_places, how="left", on="u_location")
df_merged_meta_cats.u_state.head()
df_merged_meta_cats.t_id.value_counts().head()
df_merged_meta_cats.to_hdf("FINAL_ANALYSIS_DATA.h5", "final_data")
Explanation: Add location states
df_merged_meta_cats.u_location.value_counts().to_csv("USER_LOCATIONS.txt", sep="\t", encoding='utf-8')
! head USER_LOCATIONS.txt
! python process_user_locations.py ## RUN using python3 from command line
End of explanation |
15,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
History of Machine Learning
The field of machine learning has its roots in Artificial intelligence (AI), which started in 1950 with the seminal paper Computing Machinery and Intelligence. In this paper, Alan Turing posed a key question
Step1: The first generation of AI researchers were John McCarthy
(who invented Lisp and established Stanford AI Laboratory),
Marvin Minsky and Frank Rosenblatt (who invented perceptron algorithm)
and their work consisted of building theorem provers, natural language processing,
logic programming, etc.
<!---
### Neural Network
### Logic Programming
-->
Knowledge base systems
The next generation of AI research was on developing expert systems where the focus was building a system that emulates the decision-making ability of a human expert. The expert system is an example of a broad class of system, commonly referred to as Knowledge base system. The Knowledge base system consists of two primary components
Step2: Neural networks returns
With the invention of the back-propagation algorithm, research on neural networks
(which was almost stagnant due to
criticism by Marvin Minsky of Rosenblatt's perceptron) revived during the 1980s.
VC Theory
From 1980 to early 1990s, Vladimir Vapnik introduced Vapnik-Chervonenkis theory (VC theory),
that attempts to explain the learning process from a statistical point of view.
The key question of this new field machine learning that was based on Vapnik's framework
was no longer Turing's grand question, but much more simple and realistic one
Step3: The success of VC theory for number of problems like handwriting recognition, using support vector machines (SVM),
generated a lot of interests in machine learning community.
Bayesian Theory
There is equally interesting parallel timeline in the field of (Bayesian) Statistical Inference. This field is addressed by various
names
Step4: Step 2
Step5: This is an extremely crucial step and, if done incorrectly, it can screw up the whole learning process (at least for most if not all
problems). So, most applied machine learning researchers spend a lot of time with domain experts understanding, extracting and
tuning the features<sup>3</sup>.
This step is usually accompanied with data cleaning (which is removing noisy data), but we will skip that for now.
Step 3
Step6: General Machine Learning Setup
Perform feature selection and feature extraction to get the input data;
Partition the input data into training, validation and test data;
foreach model do
Model Selection | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('sXx-PpEBR7k')
Explanation: Introduction
History of Machine Learning
The field of machine learning has its roots in Artificial intelligence (AI), which started in 1950 with the seminal paper Computing Machinery and Intelligence. In this paper, Alan Turing posed a key question: "Can machine think?" and also introduced the popular Turing test:
A computer program is said to be intelligent if it could carry on a conversation that was indistinguishable from a conversation with a human being.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('_Xcmh1LQB9I')
Explanation: The first generation of AI researchers were John McCarthy
(who invented Lisp and established Stanford AI Laboratory),
Marvin Minsky and Frank Rosenblatt (who invented the perceptron algorithm)
and their work consisted of building theorem provers, natural language processing,
logic programming, etc.
<!---
### Neural Network
### Logic Programming
-->
Knowledge base systems
The next generation of AI research was on developing expert systems, where the focus was on building a system that emulates the decision-making ability of a human expert. The expert system is an example of a broad class of systems commonly referred to as knowledge base systems. A knowledge base system consists of two primary components:
Knowledge base or fact base: a database that stores a collection of facts or assertions (and maybe even entities and relationships between entities).
Inference engine: applies a set of logical rules to the facts available in the knowledge base to deduce new facts.
Here are some popular knowledge base systems:
In 1970, Shortliffe and others at Stanford developed an expert system for medical diagnosis called Mycin. Mycin would ask the physician a series of "yes or no" questions and would return list of (diagnosis (i.e. culprit bacteria), probability, reasoning behind the diagnosis, recommended drug treatment). To specify the "reasoning behind the diagnosis", Mycin would return list of questions/answers along with the set of rules that it used to come to the diagnosis.
Cyc developed by Douglas Lenat in 1984.
Sub-components of IBM Watson
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('jmMcJ4XlrWM', start=195, end=234)
Explanation: Neural networks returns
With the invention of the back-propagation algorithm, research on neural networks
(which was almost stagnant due to
criticism by Marvin Minsky of Rosenblatt's perceptron) revived during the 1980s.
VC Theory
From 1980 to the early 1990s, Vladimir Vapnik introduced Vapnik-Chervonenkis theory (VC theory),
which attempts to explain the learning process from a statistical point of view.
The key question of this new field of machine learning, based on Vapnik's framework,
was no longer Turing's grand question, but a much simpler and more realistic one:
"(Given enough examples) Can machine learn something about them ?":
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P,
if its performance at tasks in T, as measured by P, improves with experience E ... by Tom Mitchell.
Here experience E refers to data (or examples), performance measure P usually refers to accuracy (or some other metrics) and
the task T refers to machine learning algorithms.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn import datasets
digits = datasets.load_digits()
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:10]):
plt.subplot(2, 10, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r)
plt.show()
Explanation: The success of VC theory on a number of problems like handwriting recognition, using support vector machines (SVM),
generated a lot of interest in the machine learning community.
Bayesian Theory
There is an equally interesting parallel timeline in the field of (Bayesian) Statistical Inference. This field is addressed by various
names: Probability theory (before 1838), Inverse Probability (from 1838 to 1950) and Bayesian Analysis (after 1950). The
concepts of probability were formalized by Pascal and Fermat in 17th century and later made much more rigorous by Kolmogorov
in 1930 using measure theory. In 1761, Thomas Bayes proved following theorem (now known as Bayes theorem):
$$p(H|E) = \dfrac{p(E|H)p(H)}{p(E)}$$
This simple equation has an extremely important implication. It says that if we believe the hypothesis H to be true with probability
P(H) (also called as prior) and if we are given additional evidence E from the experiment, we can then update our belief about
H to P(H | E) using the above equation. Wow ... doesn’t this essentially summarize how a scientist should think ?
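A quick worked example (the numbers are invented purely for illustration): if the prior is p(H) = 0.01, the evidence occurs with probability p(E|H) = 0.9 when H is true, and p(E) = 0.05 overall, then p(H|E) = (0.9 * 0.01) / 0.05 = 0.18 -- a single piece of evidence lifts our belief in H from 1% to 18%.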
Based on what prior was used, the research in this field can be classified into four distinct eras. The first era (Objective Bayesians) started with Thomas Bayes and Laplace, who liked to use uniform priors for most of the problems<sup>1</sup>. This continued
till the early 20th century<sup>2</sup>, when intensive efforts were made to circumvent the priors. Important researchers during this era (also
called as frequentists) were Fisher (likelihood methods, fiducial inference), Neyman (founded frequentism) and Wald (Decision
theory) and the key question they were interested in was how the results would change if you ran a procedure over and over
again, with data changing each time. This is considered to be a golden era for statistical inference, as a lot of discoveries were made
in an extremely short time. Around the 1950s, during the third era (Subjective Bayesians), researchers like Savage and de Finetti
proposed that one should sit down with a domain expert to find appropriate priors for any problem. In the fourth era, researchers
like Harold Jeffreys (who was famous for his critique about p-value: ‘An hypothesis that may be true, may be rejected because
it has not predicted observable results that have not occurred’) revived the field of objective bayes. The key idea of objective
bayesian methodology is to use frequentist analytic tools to guide their choice of priors.
Both Objective bayesian methodology and Vapnik’s framework are widely used in the field of machine learning.
Deep Learning
TODO:
Here is a Venn diagram from the deep learning book, describing the relationship between the sub-fields of AI:
<img src="images/venn_diagram_dl_ml.png" alt="Venn Diagram" style="width: 400px;"/>
When to use machine learning
The machine learning approach is suitable if the problem satisfies the following three criteria:
There exists sufficiently large data D<sub>n</sub> = { (X<sub>i</sub>, Y<sub>i</sub>) }<sub>i = 1 .. n</sub>.
There exists a pattern in the data that you intend to learn. That is, there exist a target function (or distribution) f: X -> Y which maps input feature vector to output label.
We cannot pin that pattern down mathematically or algorithmically using set of rules (i.e. f is unknown).
Other hints that you can use to determine whether machine learning is suitable for your problem:
Don’t know how to calculate the output from the input (eg: medical diagnosis, bioinformatics, biomedical imagery/computer vision).
Requirements change rapidly (eg: spam filtering, search engines).
Environment in which the system operates changes rapidly (eg: stock market, video games, robotics).
There exists tremendous individual variability (eg: recommendations, speech/handwriting recognition).
Stages of Machine Learning
Step 1: Data collection
In this step, we capture the information of real world object and store it in the computer either as text, image, audio file, video file or in one of common datastores (i.e. database, knowledge base, etc).
Let's use a simple example of recognizing digits. First, we ask set of volunteers to write numbers from 0 to 9 on piece of black paper and then scan it into a jpeg file.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn import datasets
digits = datasets.load_digits()
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:1]):
plt.subplot(2, 1, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r)
plt.title('Feature: ' + str(image.flatten()))
plt.show()
Explanation: Step 2: Feature representation
Now, we convert the binary object into a format required by the machine learning algorithm. This is usually (but not necessarily) a D-dimensional numeric vector (called a feature):
End of explanation
from sklearn import datasets
from sklearn.utils import shuffle
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
X_digits, y_digits = shuffle(X_digits, y_digits, random_state=0)
n_samples = len(X_digits)
X_train = X_digits[:int(.8 * n_samples)]
y_train = y_digits[:int(.8 * n_samples)]
X_test = X_digits[int(.8 * n_samples):]
y_test = y_digits[int(.8 * n_samples):]
Explanation: This is an extremely crucial step and, if done incorrectly, it can screw up the whole learning process (at least for most if not all
problems). So, most applied machine learning researchers spend a lot of time with domain experts understanding, extracting and
tuning the features<sup>3</sup>.
This step is usually accompanied with data cleaning (which is removing noisy data), but we will skip that for now.
Step 3: Dimensionality reduction
For computational feasibility and for the machine learning algorithm to work correctly, the number of dimensions D should be reasonable (i.e. D <<< infinity). So, during this process, you will also use the following dimensionality reduction techniques:
Feature selection: choose relevant features for the model
Feature extraction: transform high-dimensional data to lower dimensions
Both methods help to alleviate the curse of dimensionality, improve the performance of the model and speed up the learning
process. The term "curse of dimensionality" (coined by Richard Bellman) refers to the fact that some problems become intractable as the number of variables increases. In Bayesian statistics, the curse of dimensionality occurs while evaluating the posterior distribution, which often has many parameters. However, this is usually overcome by using MCMC methods.
The above two steps are often referred to as feature engineering.
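For concreteness, here is a minimal sketch of both ideas on the digits data loaded above (the scikit-learn utilities chosen here are illustrative and not this notebook's own pipeline):
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA
# Feature selection: drop pixel features whose value never varies across the samples.
selected = VarianceThreshold(threshold=0.0).fit_transform(X_digits)
# Feature extraction: project the 64 pixel values onto 16 principal components.
extracted = PCA(n_components=16).fit_transform(X_digits)
print(selected.shape, extracted.shape)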
Step 4: Training and Testing
After performing the above two steps for all the objects, we get the input data, which is divided into three parts:
Training data (about 60 to 70% of the input data): The training data is used to learn the parameters of the model (i.e. parameter selection).
Validation data (about 10 to 20% of the input data): The validation is used to learn the hyperparameters (i.e. model selection).
Test data (about 20% of the input data): The test data is used to test whether the model generalizes or not.
Let's ignore validation for now and see how to split the above data into training and testing.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.utils import shuffle
from systemml.mllearn import LogisticRegression
from pyspark.sql import SQLContext
sqlCtx = SQLContext(sc)
digits = datasets.load_digits()
X_digits = digits.data
y_digits = digits.target
X_digits, y_digits = shuffle(X_digits, y_digits, random_state=0)
n_samples = len(X_digits)
X_test = X_digits[int(.8 * n_samples):]
y_test = y_digits[int(.8 * n_samples):]
training_fraction = [0.005, 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
scores = []
for frac in training_fraction:
X_train = X_digits[:int(frac * n_samples)]
y_train = y_digits[:int(frac * n_samples)]
classifier = LogisticRegression(sqlCtx)
score = classifier.fit(X_train, y_train).score(X_test, y_test)
scores = scores + [ score ]
plt.plot(training_fraction, scores)
plt.xlabel('Fraction of data used for training: E')
plt.ylabel('Prediction score (higher the better): P')
plt.show()
Explanation: General Machine Learning Setup
Perform feature selection and feature extraction to get the input data;
Partition the input data into training, validation and test data;
foreach model do
Model Selection: Learn hyperparameters by optimizing model selection criterion on validation data
foreach parameter do
Model fitting/training: Learn parameters by optimizing training criterion on training data
After above process, the learned model is m and the learned parameters are p. Use test data to verify the accuracy (or some other performance measure) of the model/parameter (m; p).
To test that we are actually learning, we slowly increase the number of training datapoints (i.e. experience) and see if the performance (i.e. score) increases. As an example, we will use a simple machine learning model (i.e. logistic regression) and plot the performance vs number of training datapoints.
Recall:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
End of explanation |
15,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load TensorFlow
Go to Edit->Notebook settings to confirm you have a GPU accelerated kernel.
Step1: Set up FFN code and sample data
Colab already provides most of the dependencies.
Step2: Run inference | Python Code:
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
# Silence deprecation warnings for now.
tf.logging.set_verbosity(tf.logging.ERROR)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
print('GPU device not found')
gpu = False
else:
print('Found GPU at: {}'.format(device_name))
gpu = True
Explanation: Load TensorFlow
Go to Edit->Notebook settings to confirm you have a GPU accelerated kernel.
End of explanation
!git clone https://github.com/google/ffn.git
%cd ffn
from google.protobuf import text_format
from ffn.inference import inference
from ffn.inference import inference_pb2
# Download the example datasets.
!mkdir -p third_party/neuroproof_examples
!gsutil rsync -r -x ".*.gz" gs://ffn-flyem-fib25/ third_party/neuroproof_examples/
Explanation: Set up FFN code and sample data
Colab already provides most of the dependencies.
End of explanation
config = '''image {
hdf5: "third_party/neuroproof_examples/training_sample2/grayscale_maps.h5:raw"
}
image_mean: 128
image_stddev: 33
checkpoint_interval: 1800
seed_policy: "PolicyPeaks"
model_checkpoint_path: "models/fib25/model.ckpt-27465036"
model_name: "convstack_3d.ConvStack3DFFNModel"
model_args: "{\\"depth\\": 12, \\"fov_size\\": [33, 33, 33], \\"deltas\\": [8, 8, 8]}"
segmentation_output_dir: "results/fib25/training2"
inference_options {
init_activation: 0.95
pad_value: 0.05
move_threshold: 0.9
min_boundary_dist { x: 1 y: 1 z: 1}
segment_threshold: 0.6
min_segment_size: 1000
}'''
req = inference_pb2.InferenceRequest()
_ = text_format.Parse(config, req)
runner = inference.Runner()
runner.start(req)
canvas, alignment = runner.make_canvas((0, 0, 0), (250, 250, 250))
# Create a single segment, starting from the specified origin point.
if gpu:
vis_update = 20
else:
vis_update = 1
canvas.segment_at((125, 125, 125), # zyx
dynamic_image=inference.DynamicImage(),
vis_update_every=vis_update)
Explanation: Run inference
End of explanation |
15,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
Step2: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
Step3: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
Step4: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
Step5: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples | Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
x = np.random.randn(500, 500) +10
#print x[:10,:10]<0.3
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print 'Running tests with p = ', p
print 'Mean of input: ', x.mean()
print 'Mean of train-time output: ', out.mean()
print 'Mean of test-time output: ', out_test.mean()
print 'Fraction of train-time output set to zero: ', (out == 0).mean()
print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()
print
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
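For orientation while filling in dropout_forward and dropout_backward, here is a generic inverted-dropout sketch. It assumes p is the probability of keeping a unit; the assignment's own convention (and the numbers its test cells expect) may instead treat p as the probability of dropping a unit, so adapt the mask comparison accordingly. The function names are invented.
def dropout_forward_sketch(x, p, mode):
    if mode == 'train':
        mask = (np.random.rand(*x.shape) < p) / p   # zero some units, rescale the survivors
        return x * mask, mask
    return x, None                                  # test time: identity, no extra scaling
def dropout_backward_sketch(dout, mask):
    return dout * mask                              # gradient flows only through the kept units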
x = np.random.randn(10, 10) +10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print 'dx relative error: ', rel_error(dx, dx_num)
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print 'Running check with dropout = ', dropout
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
print
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specifically, if the constructor of the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
# Train two identical nets, one with dropout and one without
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print dropout
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation |
15,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3 - Strings
This notebook uses code snippets and explanations from this course.
In this notebook, we will focus on the datatype strings. The first thing you learned was printing a simple sentence
Step1: There is no difference in declaring a string with single or double quotes. However, if your string contains a quote symbol it can lead to errors if you try to enclose it with the same quotes.
Step2: In the example above the error indicates that there is something wrong with the letter s. This is because the single quote closes the string we started, and anything after that is unexpected.
To solve this we can enclose the string in double quotes, as follows
Step3: We can also use the escape character "\" in front of the quote, which will tell Python not to treat this specific quote as the end of the string.
Step4: 1.1 Multi-line strings
Strings in Python can also span across multiple lines, which can be useful for when you have a very long string, or when you want to format the output of the string in a certain way. This can be achieved in two ways
Step5: The \n or newline symbol indicates that we want to start the rest of the text on a new line in the string, the following \ indicates that we want the string to continue on the next line of the code. This difference can be quite hard to understand, but best illustrated with an example where we do not include the \n symbol.
Step7: As you can see, Python now interprets this example as a single line of text. If we use the recommended way in Python to write multiline strings, with triple double or single quotes, you will see that the \n or newline symbol is automatically included.
Step8: What will happen if you remove the backslash characters in the example? Try it out in the cell below.
Step10: 1.2 Internal representation
Step11: Internally, these strings are equally represented. We can check that with the double equals sign, which checks if two objects are the same
Step12: So from this we can conclude that multiline_text_1 has the same hidden characters (in this case \n, which stands for 'new line') as multiline_text_2. You can show that this is indeed true by using the built-in repr() function (which gives you the Python-internal representation of an object).
Step13: Another hidden character that is often used is \t, which represents tabs
Step14: 2. Strings as sequences
2.1 String indices
Strings are simply sequences of characters. Each character in a string therefore has a position, which can be referred to by the index number of the position. The index numbers start at 0 and then increase to the length of the string. You can also start counting backwards using negative indices. The following table shows all characters of the sentence "Sandwiches are yummy" in the first row. The second row and the third row show respectively the positive and negative indices for each character
Step15: Length
Step16: 2.2 Slicing and indices applied to strings
Besides using single indices we can also extract a range from a string
Step17: This is called string slicing. So how does this notation work?
python
my_string[i] # Get the character at index i.
my_string[start
Step18: 3. Immutability
The mutability of an object refers to whether an object can change or not. Strings are immutable, meaning that they cannot be changed. It is possible to create a new string-object based on the old one, but we cannot modify the existing string-object. The cells below demonstrate this.
Step19: The reasons for why strings are immutable are beyond the scope of this notebook. Just remember that if you want to modify a string, you need to overwrite the entire string, and you cannot modify parts of it by using individual indices.
4. Comparing strings
In Python it is possible to use comparison operators (as used in conditional statements) on strings. These operators are
Step20: Another way of comparing strings is to check whether a string is part of another string, which can be done using the in operator. It returns True if the string contains the relevant substring, and False if it doesn't. These two values (True and False) are called boolean values, or booleans for short. We'll talk about them in more detail later. Here are some examples to try (can you predict what will happen before running them?)
Step21: 5. Printing, concatenating and inserting strings
You will often find yourself concatenating and printing combinations of strings. Consider the following examples
Step22: Even though they may look similar, there are two different things happening here. Simply said
Step23: String concatenation, on the other hand, happens when we merge two strings into a single object using the + operator. No single blanks are inserted, and you cannot concatenate mix types. So, if you want to merge a string and an integer, you will need to convert the integer to a string.
Step24: Optionally, we can assign the concatenated string to a variable
Step25: In addition to using + to concatenate strings, we can also use the multiplication sign * in combination with an integer for repeating strings (note that we again need to add a blank after 'apples' if we want it to be inserted)
Step26: The difference between "," and "+" when printing and concatenating strings can be confusing at first. Have a look at these examples to get a better sense of their differences.
Step27: 5.1 Using f-strings
We can imagine that string concatenation can get rather confusing and unreadable if we have more variables. Consider the following example
Step28: Luckily, there is a way to make the code a lot more easy to understand and nicely formatted. In Python, you can use a
string formatting mechanism called Literal String Interpolation. Strings that are formatted using this mechanism are called f-strings, after the leading character used to denote such strings, and standing for "formatted strings". It works as follows
Step29: We can even do cool stuff like this with f-strings
Step30: Other formatting methods that you may come across include %-formatting and str.format(), but we recommend that you use f-strings because they are the most intuitive.
Using f-strings can be extremely useful if you're dealing with a lot of data you want to modify in a similar way. Suppose you want to create many new files containing data and name them according to a specific system. You can create a kind of template name and then fill in specific information using variables. (More about files later.)
6. String methods
A method is a function that is associated with an object. For example, the string-method lower() turns a string into all lowercase characters, and the string method upper() makes strings uppercase. You can call this method using the dot-notation as shown below
Step31: 6.1 Learning about methods
So how do you find out what kind of methods an object has? There are two options
Step32: If you'd like to know what one of these methods does, you can just use help() (or look it up online)
Step33: It's important to note that string methods only return the result. They do not change the string itself.
Step34: Below we illustrate some of the string methods. Try to understand what is happening. Use the help() function to find more information about each of these methods.
Step35: Exercises
Exercise 1
Step36: Exercise 2
Step37: Can you print the following? Try using both positive and negative indices.
make a new string containing your first name and print its first letter
print the number of letters in your name
Step38: Exercise 3
Step39: Can you print 'banana' in reverse ('ananab')?
Step40: Can you exchange the first and last characters in my_string ('aananb')? Create a new variable new_string to store your result.
Step41: Exercise 4
Step42: How would you print the same sentence using ","?
Step43: Can you rewrite the code below using an f-string?
Step44: Exercise 5
Step45: Remove all spaces in the sentence using a string method.
Step46: What do the methods lstrip() and rstrip() do? Try them out below.
Step47: What do the methods startswith() and endswith() do? Try them out below. | Python Code:
# Here are some strings:
string_1 = "Hello, world!"
string_2 = 'I ❤️ cheese' # If you are using Python 2, your computer will not like this.
string_3 = '1,2,3,4,5,6,7,8,9'
Explanation: Chapter 3 - Strings
This notebook uses code snippets and explanations from this course.
In this notebook, we will focus on the datatype strings. The first thing you learned was printing a simple sentence: "Hello, world!" This sentence, as any other text, was stored by Python as a string. Here are some reasons why strings are important:
Text is usually represented as a string. Text analysis is the focus of our course, so we will be dealing with strings a lot.
Strings are also used when reading in files: We tell Python which file to open by giving it a filepath, e.g. '../Data/books/HuckFinn.txt'. Don't worry about this for now, we will explain it in block 3.
At the end of this chapter, you will be able to:
define strings and understand their internal representation
understand strings as sequences
use character indices for string slicing
combine strings through printing, concatenation and insertion
compare strings using comparison operators and the in operator
understand strings as immutable objects
work with and understand string methods
understand the difference between args and kwargs
If you want to learn more about these topics, you might find the following links useful:
Documentation: String methods
Documentation: Literal String Interpolation (f-strings)
Explanation: Strings
Explanation: F-strings
Video: Strings - working with text data
Video: Strings
Video: String Indexing and Slicing
If you have questions about this chapter, please contact us ([email protected]).
1. Defining and representing strings
A string is a sequence of letters/characters which together form a whole (for instance a word, sentence or entire text). In Python, a string is a type of object for which the value is enclosed by single or double quotes. Let's define a few of them:
End of explanation
# Run this cell to see the error generated by the following line.
restaurant = 'Wendy's'
Explanation: There is no difference in declaring a string with single or double quotes. However, if your string contains a quote symbol it can lead to errors if you try to enclose it with the same quotes.
End of explanation
restaurant = "Wendy's"
# Similarly, we can enclose a string containing double quotes with single quotes:
quotes = 'Using "double" quotes enclosed by a single quote.'
Explanation: In the example above the error indicates that there is something wrong with the letter s. This is because the single quote closes the string we started, and anything after that is unexpected.
To solve this we can enclose the string in double quotes, as follows:
End of explanation
restaurant = 'Wendy\'s'
print(restaurant)
restaurant = "Wendy\"s"
print(restaurant)
Explanation: We can also use the escape character "\" in front of the quote, which will tell Python not to treat this specific quote as the end of the string.
End of explanation
# This example also works with single-quotes.
long_string = "A very long string\n\
can be split into multiple\n\
sentences by appending a newline symbol\n\
to the end of the line."
print(long_string)
Explanation: 1.1 Multi-line strings
Strings in Python can also span across multiple lines, which can be useful for when you have a very long string, or when you want to format the output of the string in a certain way. This can be achieved in two ways:
With single or double quotes, where we manually indicate that the rest of the string continues on the next line with a backslash.
With three single or double quotes.
We will first demonstrate how this would work when you use one double or single quote.
End of explanation
long_string = "A very long string \
can be split into multiple \
sentences by appending a backslash \
to the end of the line."
print(long_string)
Explanation: The \n or newline symbol indicates that we want to start the rest of the text on a new line in the string; the following \ indicates that we want the string to continue on the next line of the code. This difference can be quite hard to understand, but it is best illustrated with an example where we do not include the \n symbol.
End of explanation
long_string = """A very long string
can also be split into multiple
sentences by enclosing the string
with three double or single quotes."""
print(long_string)
print()
another_long_string = '''A very long string
can also be split into multiple
sentences by enclosing the string
with three double or single quotes.'''
print(another_long_string)
Explanation: As you can see, Python now interprets this example as a single line of text. If we use the recommended way in Python to write multiline strings, with triple double or single quotes, you will see that the \n or newline symbol is automatically included.
End of explanation
long_string = "A very long string\
can be split into multiple\
sentences by appending a backslash\
to the end of the line."
print(long_string)
Explanation: What will happen if you remove the backslash characters in the example? Try it out in the cell below.
End of explanation
multiline_text_1 = """This is a multiline text, so it is enclosed by triple quotes.
Pretty cool stuff!
I always wanted to type more than one line, so today is my lucky day!"""
multiline_text_2 = "This is a multiline text, so it is enclosed by triple quotes.\nPretty cool stuff!\nI always wanted to type more than one line, so today is my lucky day!"
print(multiline_text_1)
print() # this just prints an empty line
print(multiline_text_2)
Explanation: 1.2 Internal representation: using repr()
As we have seen above, it is possible to make strings that span multiple lines. Here are two ways to do so:
End of explanation
print(multiline_text_1 == multiline_text_2)
Explanation: Internally, these strings are equally represented. We can check that with the double equals sign, which checks whether two objects have the same value:
End of explanation
# Show the internal representation of multiline_text_1.
print(repr(multiline_text_1))
print(repr(multiline_text_2))
Explanation: So from this we can conclude that multiline_text_1 has the same hidden characters (in this case \n, which stands for 'new line') as multiline_text_2. You can show that this is indeed true by using the built-in repr() function (which gives you the Python-internal representation of an object).
End of explanation
colors = "yellow\tgreen\tblue\tred"
print(colors)
print(repr(colors))
Explanation: Another hidden character that is often used is \t, which represents tabs:
End of explanation
my_string = "Sandwiches are yummy"
print(my_string[1])
print(my_string[-1])
Explanation: 2. Strings as sequences
2.1 String indices
Strings are simply sequences of characters. Each character in a string therefore has a position, which can be referred to by the index number of the position. The index numbers start at 0 and then increase to the length of the string. You can also start counting backwards using negative indices. The following table shows all characters of the sentence "Sandwiches are yummy" in the first row. The second row and the third row show respectively the positive and negative indices for each character:
| Characters | S | a | n | d | w | i | c | h | e | s | | a | r | e | | y | u | m | m | y |
|----------------|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|
| Positive index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |
| Negative index | -20 | -19 | -18 | -17 | -16 | -15 | -14 | -13 | -12 | -11 | -10 | -9 | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1 |
You can access the characters of a string as follows:
End of explanation
number_of_characters = len(my_string)
print(number_of_characters) # Note that spaces count as characters too!
Explanation: Length: Python has a built-in function called len() that lets you compute the length of a sequence. It works like this:
End of explanation
my_string = "Sandwiches are yummy"
print(my_string[1:4])
Explanation: 2.2 Slicing and indices applied to strings
Besides using single indices we can also extract a range from a string:
End of explanation
print(my_string[1:4])
print(my_string[1:4:1])
print(my_string[11:14])
print(my_string[15:])
print(my_string[:9])
print('cow'[::2])
print('cow'[::-2])
# a fun trick to reverse sequences:
print(my_string[::-1])
# You can do something similar with lists (you don't have to understand this is detail now - but we'll show you an
# example already, so you've seen it):
my_list = ['a', 'bunch', 'of', 'words']
print(my_list[3])
print(my_list[2:4])
print(my_list[-1])
Explanation: This is called string slicing. So how does this notation work?
python
my_string[i] # Get the character at index i.
my_string[start:end] # Get the substring starting at 'start' and ending *before* 'end'.
my_string[start:end:stepsize] # Get all characters starting from 'start', ending before 'end',
# with a specific step size.
You can also leave parts out:
python
my_string[:i] # Get the substring starting at index 0 and ending just before i.
my_string[i:] # Get the substring starting at i and running all the way to the end.
my_string[::i] # Get a string going from start to end with step size i.
You can also have negative step size. my_string[::-1] is the idiomatic way to reverse a string.
Tip: Slicing and accessing values via indices is very useful and can be applied to other Python objects that have a fixed sequence, such as lists (we will see how in the subsequent notebooks). Try to understand what is going on with string slicing - it will be very helpful in the rest of the course!
Do you know what the following statements will print?
End of explanation
# This is fine, because we are creating a new string. The old one remains unchanged:
fruit = 'guanabana'
island = fruit[:5]
print(island, 'island')
print(fruit, 'fruit')
# This works because we are creating a new string and overwriting our old one
fruit = fruit[5:] + 'na'
print(fruit)
# This attempt to change the ending into `aan' does not work because now we are trying to change an existing string
fruit[4:5] = 'an'
print(fruit)
# We could do this with a list though (don't worry about this yet - it is just meant to show the contrast)
fruits = ['cherry', 'blueberry', 'banana']
fruits[2:3] = ['raspberry', 'kiwi']
fruits
# If we want to modify a string by exchanging characters, we need to do:
fruit = fruit[:4] + 'an'
print(fruit)
Explanation: 3. Immutability
The mutability of an object refers to whether an object can change or not. Strings are immutable, meaning that they cannot be changed. It is possible to create a new string-object based on the old one, but we cannot modify the existing string-object. The cells below demonstrate this.
End of explanation
print('a' == 'a')
print('a' != 'b')
print('a' == 'A') # string comparison is case-sensitive
print('a' < 'b') # alphabetical order
print('A' < 'a') # uppercase comes before lowercase
print('B' < 'a') # uppercase comes before lowercase
print()
print('orange' == 'Orange')
print('orange' > 'Orange')
print('orange' < 'Orange')
print('orange' > 'banana')
print('Orange' > 'banana')
Explanation: The reasons for why strings are immutable are beyond the scope of this notebook. Just remember that if you want to modify a string, you need to overwrite the entire string, and you cannot modify parts of it by using individual indices.
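For instance (an illustrative aside, not part of the original notebook), string methods such as replace(), which are introduced later in this chapter, also respect immutability - they return a new string and leave the original untouched:
python
s = 'banana'
t = s.replace('a', 'o')
print(s, t)   # s is unchanged; replace() returned a new string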
4. Comparing strings
In Python it is possible to use comparison operators (as used in conditional statements) on strings. These operators are:
== ('is the same as')
!= ('is not the same as')
< ('is smaller than')
<= ('is the same as or smaller than')
> ('is greater than')
>= ('is the same as or greater than')
Attention
'=' is used to assign a value to a variable whereas '==' is used to compare two values. If you get errors in comparisons, check if you used the correct operator.
Some of these symbols are probably familiar to you from your math classes. Most likely, you have used them before to compare numbers. However, we can also use them to compare strings!
There are a number of things we have to know about python when comparing strings:
String comparison is always case-sensitive
Internally, characters are represented as numerical values, which can be ranked. You can use the smaller than/greater than operators to put words in lexicographical order. This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters (so first A, B, C, etc. and then a, b, c, etc.)
Hint: In practice, you will often use == and !=. The 'greater than' and 'smaller than' operators are used in sorting algorithms (e.g. to sort a list of strings in alphabetical order), but you will hardly ever use them directly to compare strings.
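For example (not in the original notebook), sorting uses exactly this lexicographical ordering, which is where the uppercase-before-lowercase rule becomes visible:
python
print(sorted(['banana', 'Orange', 'apple']))   # ['Orange', 'apple', 'banana']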
End of explanation
"fun" in "function"
"I" in "Team"
"am" in "Team"
"App" in "apple" # Capitals are not the same as lowercase characters!
"apple" in "apple"
"applepie" in "apple"
Explanation: Another way of comparing strings is to check whether a string is part of another string, which can be done using the in operator. It returns True if the string contains the relevant substring, and False if it doesn't. These two values (True and False) are called boolean values, or booleans for short. We'll talk about them in more detail later. Here are some examples to try (can you predict what will happen before running them?):
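As a small aside (not in the original notebook), the negation not in works the same way:
python
print('pie' not in 'apple')   # True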
End of explanation
print("Hello", "World")
print("Hello " + "World")
Explanation: 5. Printing, concatenating and inserting strings
You will often find yourself concatenating and printing combinations of strings. Consider the following examples:
End of explanation
number = 5
print("I have", number, "apples")
Explanation: Even though they may look similar, there are two different things happening here. Simply said: the plus in the expression is doing concatenation, but the comma is not doing concatenation.
The 'print()' function, which we have seen many times now, will print as strings everything in a comma-separated sequence of expressions to your screen, and it will separate the results with single blanks by default. Note that you can mix types: anything that is not already a string is automatically converted to its string representation.
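As an aside (not in the original notebook), the default single-blank separator of print() can be changed with the sep keyword:
python
print('I', 'have', 5, 'apples', sep='-')   # I-have-5-apples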
End of explanation
number = 5
print("I have " + str(number) + " apples")
Explanation: String concatenation, on the other hand, happens when we merge two strings into a single object using the + operator. No single blanks are inserted, and you cannot concatenate mixed types. So, if you want to merge a string and an integer, you will need to convert the integer to a string.
End of explanation
my_string = "I have " + str(number) + " apples"
print(my_string)
Explanation: Optionally, we can assign the concatenated string to a variable:
End of explanation
my_string = "apples " * 5
print(my_string)
Explanation: In addition to using + to concatenate strings, we can also use the multiplication sign * in combination with an integer for repeating strings (note that we again need to add a blank after 'apples' if we want it to be inserted):
End of explanation
print("Hello", "World")
print("Hello" + "World")
print("Hello " + "World")
print(5, "eggs")
print(str(5), "eggs")
print(5 + " eggs")
print(str(5) + " eggs")
text = "Hello" + "World"
print(text)
print(type(text))
text = "Hello", "World"
print(text)
print(type(text))
Explanation: The difference between "," and "+" when printing and concatenating strings can be confusing at first. Have a look at these examples to get a better sense of their differences.
End of explanation
name = "Pia"
age = 26
country = "Austria"
residence = "The Netherlands"
introduction = "Hello. My name is " + name + ". I'm " + str(age) + " years old and I'm from " + country + \
", but I live in "+ residence +'.'
print(introduction)
Explanation: 5.1 Using f-strings
We can imagine that string concatenation can get rather confusing and unreadable if we have more variables. Consider the following example:
End of explanation
name="Pia"
age=26
country="Austria"
residence = "The Netherlands"
introduction = f"Hello. My name is {name}. I'm {age} years old and I'm from {country}, but I live in {residence}."
introduction
Explanation: Luckily, there is a way to make the code a lot easier to understand and nicely formatted. In Python, you can use a
string formatting mechanism called Literal String Interpolation. Strings that are formatted using this mechanism are called f-strings, after the leading character used to denote such strings, and standing for "formatted strings". It works as follows:
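Beyond plain substitution (an aside, not in the original notebook), format specifiers can be used inside the braces:
python
price = 3.14159
print(f'The price is {price:.2f} euros')   # The price is 3.14 euros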
End of explanation
text = f"Soon, I'm turning {age+1} years."
print(text)
Explanation: We can even do cool stuff like this with f-strings:
End of explanation
string_1 = 'Hello, world!'
print(string_1) # The original string.
print(string_1.lower()) # Lowercased.
print(string_1.upper())  # Uppercased.
Explanation: Other formatting methods that you may come across include %-formatting and str.format(), but we recommend that you use f-strings because they are the most intuitive.
Using f-strings can be extremely useful if you're dealing with a lot of data you want to modify in a similar way. Suppose you want to create many new files containing data and name them according to a specific system. You can create a kind of template name and then fill in specific information using variables. (More about files later.)
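A tiny illustration of that idea (hypothetical file names, not from the original notebook):
python
for year in [2019, 2020, 2021]:
    print(f'results_{year}.csv')   # a template name filled in with a variable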
6. String methods
A method is a function that is associated with an object. For example, the string-method lower() turns a string into all lowercase characters, and the string method upper() makes strings uppercase. You can call this method using the dot-notation as shown below:
End of explanation
# Run this cell to see all methods for strings
dir(str)
Explanation: 6.1 Learning about methods
So how do you find out what kind of methods an object has? There are two options:
Read the documentation. See here for the string methods.
Use the dir() function, which returns a list of method names (as well as attributes of the object). If you want to know what a specific method does, use the help() function.
Run the code below to see what the output of dir() looks like.
The method names that start and end with double underscores ('dunder methods') are Python-internal. They are what makes built-in functions like len() work (len() internally calls the string's __len__() method), and they cause Python to know what to do when you, for example, use a for-loop with a string.
The other method names indicate common and useful methods.
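A quick illustration of that last point (not part of the original notebook):
python
s = 'hello'
print(len(s), s.__len__())   # both give 5 - len() calls __len__() behind the scenes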
End of explanation
help(str.upper)
Explanation: If you'd like to know what one of these methods does, you can just use help() (or look it up online):
End of explanation
x = 'test' # Defining x.
y = x.upper() # Using x.upper(), assigning the result to variable y.
print(y) # Print y.
print(x) # Print x. It is unchanged.
Explanation: It's important to note that string methods only return the result. They do not change the string itself.
End of explanation
# Find out more about each of the methods used below by changing the name of the method
help(str.strip)
s = ' Humpty Dumpty sat on the wall '
print(s)
s = s.strip()
print(s)
print(s.upper())
print(s.lower())
print(s.count("u"))
print(s.count("U"))
print(s.find('sat'))
print(s.find('t', 12))
print(s.find('q', 12))
print(s.replace('sat on', 'fell off'))
words = s.split() # This returns a list, which we will talk about later.
for word in words: # But you can iterate over each word in this manner
print(word.capitalize())
print('-'.join(words))
Explanation: Below we illustrate some of the string methods. Try to understand what is happening. Use the help() function to find more information about each of these methods.
End of explanation
print("A message").
print("A message')
print('A message"')
Explanation: Exercises
Exercise 1:
Can you identify and explain the errors in the following lines of code? Correct them please!
End of explanation
my_string = "Sandwiches are yummy"
# your code here
Explanation: Exercise 2:
Can you print the following? Try using both positive and negative indices.
the letter 'd' in my_string
the letter 'c' in my_string
End of explanation
# your code here
Explanation: Can you print the following? Try using both positive and negative indices.
make a new string containing your first name and print its first letter
print the number of letters in your name
End of explanation
# your code here
Explanation: Exercise 3:
Can you print all a's in the word 'banana'?
End of explanation
# your code here
Explanation: Can you print 'banana' in reverse ('ananab')?
End of explanation
my_string = "banana"
new_string = # your code here
Explanation: Can you exchange the first and last characters in my_string ('aananb')? Create a new variable new_string to store your result.
End of explanation
name = "Bruce Banner"
alterego = "The Hulk"
colour = "Green"
country = "USA"
print("His name is" + name + "and his alter ego is" + alterego +
", a big" + colour + "superhero from the" + country + ".")
Explanation: Exercise 4:
Find a way to fix the spacing problem below keeping the "+".
End of explanation
name = "Bruce Banner"
alterego = "The Hulk"
colour = "Green"
country = "USA"
print("His name is" + name + "and his alter ego is" + alterego +
", a big" + colour + "superhero from the" + country + ".")
Explanation: How would you print the same sentence using ","?
End of explanation
name = "Bruce Banner"
alterego = "The Hulk"
colour = "green"
country = "the USA"
birth_year = 1969
current_year = 2017
print("His name is " + name + " and his alter ego is " + alterego +
", a big " + colour + " superhero from " + country + ". He was born in " + str(birth_year) +
", so he must be " + str(current_year - birth_year - 1) + " or " + str(current_year - birth_year) +
" years old now.")
Explanation: Can you rewrite the code below using an f-string?
End of explanation
my_string = "banana"
# your code here
Explanation: Exercise 5:
Replace all a's by o's in 'banana' using a string method.
End of explanation
my_string = "Humpty Dumpty sat on the wall"
# your code here
Explanation: Remove all spaces in the sentence using a string method.
End of explanation
# find out what lstrip() and rstrip() do
Explanation: What do the methods lstrip() and rstrip() do? Try them out below.
End of explanation
# find out what startswith() and endswith() do
Explanation: What do the methods startswith() and endswith() do? Try them out below.
End of explanation |
15,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align
Step1: Impressed? The first time we called the function, Python generated the code corresponding to the data type we passed in. We can see it here
Step2: And print the generated code like this
Step3: Understanding numba
Step4: And let's copy the original function directly
Step5: Step 0
Step6: There seem to be too many calls to list.append, although they account for a small share of the execution time.
To install line_profiler
Step7: Step 1
Step8: More than 10% of the elements of the matrix satisfy the condition of being "local minima", so it is far from negligible. In our example this amounts to more than 400,000 elements
Step9: Instead of this, what we are going to do is create another array, with the same shape as our data, and store a True value in the elements that satisfy the local-minimum condition. This way we also follow one of the golden rules of Software Carpentry
Step10: On top of that, I can take advantage of NumPy's excellent nonzero function. I check that the outputs are the same
Step11: And I evaluate the performance of the new function
Step12: As expected, the timings are similar, because I have not optimized the bottleneck, which is the array comparisons. At least we no longer have two objects in memory that grow unpredictably
Step13: What happens if we do the same with the version that does not use lists?
Step14: We get an error because numba does not recognize the np.zeros_like function with the arguments we passed in. If we go to the documentation http
Step15: We did it
Step17: The standard atmosphere
Computing the thermodynamic properties of the standard atmosphere is a classic problem that every aeronautical engineer has faced very early in their training. The theory is simple
Step19: Navier solution for a flat plate
Implement and plot the Navier solution for the deflection of a rectangular plate, simply supported on its four edges (that is, the edges can rotate | Python Code:
import numpy as np
from numba import njit
arr2d = np.arange(20 * 30, dtype=float).reshape(20,30)
%%timeit
np.sum(arr2d)
def py_sum(arr):
M, N = arr.shape
sum = 0.0
for i in range(M):
for j in range(N):
sum += arr[i,j]
return sum
%%timeit
py_sum(arr2d)
fast_sum = njit(py_sum)
%%timeit -n1 -r1
fast_sum(arr2d)
%%timeit
fast_sum(arr2d)
Explanation: <table width="100%" border="0">
<tr>
<td><img src="./images/ing.png" alt="" align="left" /></td>
<td><img src="./images/ucv.png" alt="" align="center" height="100" width="100" /></td>
<td><img src="./images/mec.png" alt="" align="right"/></td>
</tr>
</table>
<br>
<h1 style="text-align: center;"> Curso de Python para Ingenieros Mecánicos </h1>
<h3 style="text-align: center;"> By: Eduardo Vieira</h3>
<br>
<br>
<h1 style="text-align: center;"> Numba - Speeding up Python code </h1>
<br>
How to speed up Python using numba
_Sometimes we will come across algorithms that are not easily vectorizable or expressible as operations on NumPy arrays, and we will suffer Python's performance problems. In this notebook we will take a thorough look at how to substantially speed up our Python code using numba. This lesson is based on the article http://pybonacci.org/2015/03/13/como-acelerar-tu-codigo-python-con-numba/ _
What is numba?
numba is a just-in-time (JIT) compiler for Python that generates machine code for CPU or GPU using the LLVM infrastructure, specialized in numerical applications. Let's look at a very basic example of how it works:
End of explanation
fast_sum.signatures
Explanation: Impressed? The first time we called the function, Python generated the code corresponding to the data type we passed in. We can see it here:
End of explanation
fast_sum.inspect_types()
Explanation: And print the generated code like this:
End of explanation
data = np.random.randn(2000, 2000)
Explanation: Understanding numba: the nopython mode
As we can read in the documentation, numba has two basic modes of operation: object mode and nopython mode.
Object mode generates code that handles all variables as Python objects and uses the Python C API to operate on them. In general there will be no performance gains in this mode (it can even be slower), so my personal recommendation is simply not to use it. There are cases where numba can detect the loops and optimize them individually (loop-jitting), but I am not going to pay much attention to that.
Nopython mode generates code that is independent of the Python C API. This has the downside that we cannot use all the features of the language, but it has a significant effect on performance. Another restriction is that no memory can be allocated for new objects.
By default numba uses nopython mode whenever it can, and falls back to object mode otherwise. We are going to force nopython mode (or "strict mode" as I like to call it) because it is the only way to exploit the potential of numba.
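For reference (a minimal sketch, not part of the original article), these are two equivalent ways of requesting strict nopython mode with the public numba decorators:
python
from numba import jit, njit

@jit(nopython=True)   # explicit request for nopython mode
def f(x):
    return x + 1

@njit                 # njit is shorthand for jit(nopython=True)
def g(x):
    return x + 1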
Scope of application
The problem with nopython mode is that the error messages are completely useless in most cases, so before we rush into compiling functions with numba it is worth reviewing what we cannot do, in order to anticipate the best way to structure our code. You can check in the documentation the subset of Python supported by numba in nopython mode, and I warn you that, at least for now, we do not have list comprehensions, generator delegation and a few other things. Let me highlight a sentence taken from the numba homepage:
"With a few annotations, array-oriented and math-heavy Python code can be just-in-time compiled to native machine instructions, similar in performance to C, C++ and Fortran". [Emphasis mine]
I am sorry to disappoint the audience, but numba will not speed up every piece of Python code we throw at it: it is focused on mathematical operations with arrays. With that clarified, let's get to work with an applied example :)
Speeding up a function with numba
We will try to speed up the following function, taken from the article http://pybonacci.org/2015/03/09/c-elemental-querido-cython/:
"For example, imagine that we have to detect local minimum values within a grid. The minimum values are simply values lower than the ones at the 8 nodes of their immediate neighbourhood. In the following figure, the green node is a node with a minimum, and everything around it has higher values:
<table>
<tr>
<td style="background:red">(2, 0)</td>
<td style="background:red">(2, 1)</td>
<td style="background:red">(2, 2)</td>
</tr>
<tr>
<td style="background:red">(1, 0)</td>
<td style="background:green">(1. 1)</td>
<td style="background:red">(1, 2)</td>
</tr>
<tr>
<td style="background:red">(0, 0)</td>
<td style="background:red">(0, 1)</td>
<td style="background:red">(0, 2)</td>
</tr>
</table>
We create our data array:
End of explanation
def busca_min(malla):
minimosx = []
minimosy = []
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimosx.append(i)
minimosy.append(j)
return np.array(minimosx), np.array(minimosy)
busca_min(data)
Explanation: And let's copy the original function directly:
End of explanation
%%timeit
busca_min(data)
stats = %prun -s cumtime -rq busca_min(data)
stats.print_stats()
Explanation: Step 0: Analyze the performance
A guide on how to analyze performance in Python: https://www.huyng.com/posts/python-performance-analysis
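Outside IPython, the same information can be obtained with the standard library profiler (a small aside, assuming the busca_min and data objects defined above):
python
import cProfile
cProfile.run('busca_min(data)', sort='cumtime')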
End of explanation
%load_ext line_profiler
stats = %lprun -f busca_min -r busca_min(data)
stats.print_stats()
Explanation: There seem to be too many calls to list.append, although they account for a small share of the execution time.
To install line_profiler:
conda install -c anaconda line_profiler
End of explanation
mx, my = busca_min(data)
mx.size / data.size
Explanation: Step 1: Improve the algorithm
Appending to those two lists so many times does not seem like a good idea. In fact, we can check that this happens for a significant fraction of the elements:
End of explanation
mx.size
Explanation: More than 10% of the elements of the matrix satisfy the condition of being "local minima", so it is far from negligible. In our example this amounts to more than 400,000 elements:
End of explanation
def busca_min_np(malla):
minimos = np.zeros_like(malla, dtype=bool)
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimos[i, j] = True
return np.nonzero(minimos)
Explanation: Instead of this, what we are going to do is create another array, with the same shape as our data, and store a True value in the elements that satisfy the local-minimum condition. This way we also follow one of the golden rules of Software Carpentry: "Always initialize from data".
End of explanation
np.testing.assert_array_equal(busca_min(data)[0], busca_min_np(data)[0])
np.testing.assert_array_equal(busca_min(data)[1], busca_min_np(data)[1])
Explanation: On top of that, I can take advantage of NumPy's excellent nonzero function. I check that the outputs are the same:
End of explanation
%timeit busca_min_np(data)
Explanation: And I evaluate the performance of the new function:
End of explanation
busca_min_jit = njit(busca_min)
busca_min_jit(data)
%timeit busca_min_jit(data)
Explanation: As expected, the timings are similar, because I have not optimized the bottleneck, which is the array comparisons. At least we no longer have two objects in memory that grow unpredictably: we can now use numba.
Step 2: Apply numba.jit(nopython=True)
As we said before, we are going to force numba to work in nopython mode to guarantee that we get a performance improvement. If we try to compile the first function, we already see a substantial performance gain:
End of explanation
busca_min_np_jit = njit(busca_min_np)
busca_min_np_jit(data)
Explanation: What happens if we do the same with the version that does not use lists?
End of explanation
@njit
def busca_min_np2_jit(malla):
minimos = np.zeros_like(malla, np.bool_) # <-- this is the changed line
for i in range(1, malla.shape[1]-1):
for j in range(1, malla.shape[0]-1):
if (malla[j, i] < malla[j-1, i-1] and
malla[j, i] < malla[j-1, i] and
malla[j, i] < malla[j-1, i+1] and
malla[j, i] < malla[j, i-1] and
malla[j, i] < malla[j, i+1] and
malla[j, i] < malla[j+1, i-1] and
malla[j, i] < malla[j+1, i] and
malla[j, i] < malla[j+1, i+1]):
minimos[i, j] = True
return np.nonzero(minimos)
busca_min_np2_jit(data)
%timeit busca_min_np2_jit(data)
Explanation: We get an error because numba does not recognize the np.zeros_like function with the arguments we passed in. If we go to the documentation http://numba.pydata.org/numba-doc/0.31.0/reference/numpysupported.html#other-functions, we see that we have to use NumPy types, in this case np.bool_.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from numpy import sin, pi
Explanation: We did it: 70x faster :)
Exercises
End of explanation
# Constants
R_a = 287.05287 # J/(Kg·K)
g0 = 9.80665 # m/s^2
T0 = 288.15 # K
p0 = 101325.0 # Pa
alpha = np.array([-6.5e-3, 0.0]) # K / m
# Computed constants
T1 = T0 + alpha[0] * 11000.0
p1 = p0 * (T0 / (T0 + alpha[0] * 11000.0)) ** (g0 / (R_a * alpha[0]))
def atm(h):
Standard atmosphere temperature, pressure and density.
Parameters
----------
h : array-like
Geopotential altitude, m.
h = np.atleast_1d(h).astype(float)
scalar = (h.size == 1)
assert len(h.shape) == 1
T = np.empty_like(h)
p = np.empty_like(h)
rho = np.empty_like(h)
# Actually compute the values
_atm(h, T, p, rho)
if scalar:
T = T[0]
p = p[0]
rho = rho[0]
return T, p, rho
@njit
def _atm(h, T, p, rho):
for ii in range(h.size):
if 0.0 <= h[ii] < 11000.0:
T[ii] = T0 + alpha[0] * h[ii]
p[ii] = p0 * (T0 / (T0 + alpha[0] * h[ii])) ** (g0 / (R_a * alpha[0]))
rho[ii] = p[ii] / (R_a * T[ii])
elif 11000.0 <= h[ii] <= 20000.0:
T[ii] = T1 # + alpha[1] * (h[ii] - 11000.0)
p[ii] = p1 * np.exp(-g0 * (h[ii] - 11000.0) / (R_a * T1))
rho[ii] = p[ii] / (R_a * T[ii])
# aeropython: preserve
h = np.linspace(0, 20000)
T, p, _ = atm(h)
fig, ax1 = plt.subplots()
l1, = ax1.plot(T - 273, h, color="C0")
ax1.set_xlabel("T (°C)")
ax2 = ax1.twiny()
l2, = ax2.plot(p, h, color="C1")
ax2.set_xlabel("p (Pa)")
ax1.legend((l1, l2), ["Temperature", "Pressure"], loc=0)
ax1.grid()
Explanation: The standard atmosphere
Computing the thermodynamic properties of the standard atmosphere is a classic problem that every aeronautical engineer has faced very early in their training. The theory is simple: we impose a law for the variation of temperature with altitude $T = T(h)$, the pressure is obtained from hydrostatic considerations $p = p(T)$, and the density from the ideal gas equation $\rho = \rho(p, T)$. The particular feature of the standard atmosphere is that the variation of temperature with altitude is a simplified, piecewise-defined function, so computing temperature, pressure and density for a given altitude looks a lot like doing this:
$$T(h) = \begin{cases} T_0 + \alpha h & 0 \le h \le 11000 \\ T(11000) & 11000 < h \le 20000 \end{cases}$$
$$T_0 = 288.16~\text{K}, \qquad \alpha = -6.5 \cdot 10^{-3}~\text{K/m}$$
$$\rho(h) = \begin{cases} \rho_0 \left( \frac{T}{T_0} \right)^{-\frac{g}{\alpha R} - 1} & 0 \le h \le 11000 \\ \rho(11000)\, e^{\frac{-g(h - 11000)}{R T}} & 11000 < h \le 20000 \end{cases}$$
$$\rho_0 = 1.225~\text{[SI]}, \qquad R = 287~\text{[SI]}$$
$$p = \rho R_a T$$
python
if 0.0 <= h < 11000.0:
    T = T0 + alpha * h
    rho = ...  # something that depends on T
    p = rho * R_a * T
elif 11000.0 <= h < 20000.0:
    T = T1
    rho = ...
    p = rho * R_a * T
The problem comes when we want to vectorize this function and allow h to be an array of altitudes. This is very convenient when we want to plot some property with matplotlib, for example.
Intuitively, there are two ways of doing this: using NumPy functions or iterating over each element of the array.
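As a quick hand check (spot values computed by hand, not part of the original notebook): at h = 5000 m the troposphere law gives T ≈ 288.15 − 6.5·10⁻³·5000 ≈ 255.7 K and ρ ≈ 0.74 kg/m³, which the atm function defined above should reproduce:
python
T, p, rho = atm(5000.0)
print(T, rho)   # roughly 255.7 K and 0.736 kg/m^3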
End of explanation
@njit
def a_mn_point(P, a, b, xi, eta, mm, nn):
Navier series coefficient for concentrated load.
return 4 * P * sin(mm * pi * xi / a) * sin(nn * pi * eta / b) / (a * b)
@njit
def plate_displacement(xx, yy, ww, a, b, P, xi, eta, D, max_m, max_n):
max_i, max_j = ww.shape
for mm in range(1, max_m):
for nn in range(1, max_n):
for ii in range(max_i):
for jj in range(max_j):
a_mn = a_mn_point(P, a, b, xi, eta, mm, nn)
ww[ii, jj] += (a_mn / (mm**2 / a**2 + nn**2 / b**2)**2
* sin(mm * pi * xx[ii, jj] / a)
* sin(nn * pi * yy[ii, jj] / b)
/ (pi**4 * D))
# aeropython: preserve
# Plate geometry
a = 1.0 # m
b = 1.0 # m
h = 50e-3 # m
# Material properties
E = 69e9 # Pa
nu = 0.35
# Series terms
max_m = 16
max_n = 16
# Computation points
# NOTE: With an odd number of points the center of the place is included in
# the grid
NUM_POINTS = 101
# Load
P = 10e3 # N
xi = 3 * a / 4
eta = a / 2
# Flexural rigidity
D = h**3 * E / (12 * (1 - nu**2))
# ---
# Set up domain
x = np.linspace(0, a, num=NUM_POINTS)
y = np.linspace(0, b, num=NUM_POINTS)
xx, yy = np.meshgrid(x, y)
# Compute displacement field
ww = np.zeros_like(xx)
plate_displacement(xx, yy, ww, a, b, P, xi, eta, D, max_m, max_n)
# Print maximum displacement
w_max = abs(ww).max()
print("Maximum displacement = %14.12f mm" % (w_max * 1e3))
print("alpha = %7.5f" % (w_max / (P * a**2 / D)))
print("alpha * P a^2 / D = %6.4f mm" % (0.01160 * P * a**2 / D * 1e3))
plt.contourf(xx, yy, ww)
plt.colorbar()
# This cell styles the notebook
from IPython.core.display import HTML
css_file = './css/aeropython.css'
HTML(open(css_file, "r").read())
Explanation: Navier solution for a flat plate
Implement and plot the Navier solution for the deflection of a rectangular plate, simply supported on its four edges (that is, the edges can rotate: they are not clamped), subjected to a transverse load. The mathematical expression is:
$$w(x,y) = \sum_{m=1}^\infty \sum_{n=1}^\infty \frac{a_{mn}}{\pi^4 D}\,\left(\frac{m^2}{a^2}+\frac{n^2}{b^2}\right)^{-2}\,\sin\frac{m \pi x}{a}\sin\frac{n \pi y}{b}$$
where $a_{mn}$ are the Fourier coefficients of the applied load.
For each point $(x, y)$ we have to evaluate a double series; if on top of that we want to evaluate it on a meshgrid, we need a quadruple loop.
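As a quick hand check of the numbers used above (hand arithmetic, not part of the original notebook): with $h = 50$ mm, $E = 69$ GPa and $\nu = 0.35$, the flexural rigidity is $D = h^3 E / (12(1 - \nu^2)) = (0.05)^3 \cdot 69 \cdot 10^9 / (12 \cdot 0.8775) \approx 8.19 \cdot 10^5~\text{N·m}$, so the reference deflection printed at the end, $0.01160\, P a^2 / D$, comes out on the order of $0.14$ mm.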
End of explanation |
15,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parse Json
Step1: Load the raw data
Step2: Use the LXML parser to analyze the article structure
Step3: Extract the list of image src values
Step4: Count the image src occurrences
Step5: Use reduceByKey and sortBy to compute the img src ranking
Please refer to the following documentation
[http | Python Code:
def parseRaw(json_map):
url = json_map['url']
content = json_map['html']
return (url,content)
Explanation: Parse Json
End of explanation
import json
import pprint
pp = pprint.PrettyPrinter(indent=2)
path = "./pixnet.txt"
all_content = sc.textFile(path).map(json.loads).map(parseRaw)
Explanation: Load the raw data
End of explanation
def parseImgSrc(x):
try:
urls = list()
import lxml.html
from urlparse import urlparse
root = lxml.html.fromstring(x)
t = root.getroottree()
for src in root.xpath('//img/@src'):
try :
host = urlparse(src).netloc
if '.' not in host : continue
if host.count('.') == 1 :
pass
else:
host = host[host.index('.')+1:]
urls.append('imgsrc_'+host)
except :
print "Error Parse At:" , src
for src in root.xpath('//input[@src]/@src'):
try :
host = urlparse(src).netloc
if '.' not in host : continue
if host.count('.') == 1 :
pass
else:
host = host[host.index('.')+1:]
urls.append('imgsrc_'+host)
except :
print "Error parseImgSrc At:" , src
except :
pass
return urls
Explanation: Use the LXML parser to analyze the article structure
End of explanation
image_list = all_content.map(lambda x :parseImgSrc(x[1]))
pp.pprint(image_list.first()[:10])
Explanation: Extract the list of image src values
End of explanation
img_src_count = all_content.map(
lambda x :parseImgSrc(x[1])).flatMap(
lambda x: x).countByValue()
for i in img_src_count:
print i , ':' , img_src_count[i]
Explanation: Count the image src occurrences
End of explanation
from operator import add
all_content.map(
lambda x :parseImgSrc(x[1])).flatMap(
lambda x: x).map(
lambda x: (x,1)).reduceByKey(add).sortBy(lambda x:x[1] ,ascending =False).collect()
Explanation: Use reduceByKey and sortBy to compute the img src ranking
Refer to the following documentation:
[http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD]
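A small variation (a sketch, not in the original notebook): take() returns only the first N elements of the sorted RDD instead of collecting everything to the driver:
python
all_content.map(lambda x: parseImgSrc(x[1])).flatMap(lambda x: x).map(
    lambda x: (x, 1)).reduceByKey(add).sortBy(lambda x: x[1], ascending=False).take(10)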
End of explanation |
15,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Python Syntax
In this exercise, you will work through some simple blocks of code so you learn the essentials of the Python language syntax.
For each of the code blocks below, read the code before running it. Try to imagine what it will do. Then run it, and check to see if you were right. If it did something other than what you expected, play with it a little bit. Can you make it do what you were expecting? Try changing it, and run it again, and see what happens.
Pressing Shift+Enter will run the currently highlighted code block.
Feel free to create a new, empty code block and use it as a place to experiment with code of your own.
Simple Expressions
Let's start off with some simple math.
Step1: Next, let's move on to strings.
Step2: Let's look at logical values
Step3: Next, let's look at some lists.
Step4: Variables
Variables will hold onto values you give them.
Step5: Making Decisions
Step6: Going Around in Circles | Python Code:
1 + 1
2 * 4
(2 * 4) - 2
4 ** 2 # Raise a number to a power
16 / 4
15 / 4
2.5 * 2.0
15.0 / 4
Explanation: Basic Python Syntax
In this exercise, you will work through some simple blocks of code so you learn the essentials of the Python language syntax.
For each of the code blocks below, read the code before running it. Try to imagine what it will do. Then run it, and check to see if you were right. If it did something other than what you expected, play with it a little bit. Can you make it do what you were expecting? Try changing it, and run it again, and see what happens.
Pressing Shift+Enter will run the currently highlighted code block.
Feel free to create a new, empty code block and use it as a place to experiment with code of your own.
Simple Expressions
Let's start off with some simple math.
End of explanation
'a' + 'b'
'Python ' + 'is' + ' fun'
"Python isn't afraid of single quotes"
'It is not "afraid" of double quotes either'
"The value is " + 17
"The value is " + str(17)
'The value is {0}'.format(17)
'Is {0} smaller than {1}?'.format(5.0, 12)
'Yes {1} is bigger than {0}.'.format(5.0, 12)
Explanation: Next, let's move on to strings.
End of explanation
True
True or False
True and False
1 < 2
'a' > 'z'
'a' = 'a'
'a' == 'a'
Explanation: Let's look at logical values
End of explanation
[1, 2, 3]
range(1, 7)
[1, 2] + [3]
['a', 'b', 'c'] + ['c', 'd', 'e']
1 in [1, 2, 3]
7 in [1, 2, 3]
len([1, 2, 3, 4, 5, 10, 20])
max([1, 5, 2, 100, 75, 3])
Explanation: Next, let's look at some lists.
End of explanation
a = 3
print a
a = 2
a
a = 7
a + 1
b = 2
a * b
c = 'Python'
c
d = c + ' is cool'
d
a = [1, 2, 3]
a.extend([8, 10, 12])
a
a[0]
a[1]
a[-1]
a[-2]
a[2:5]
Explanation: Variables
Variables will hold onto values you give them.
End of explanation
a = 5
b = 10
if a < b:
print "Smaller"
else:
print "Larger"
a = 500
b = 100
if a < b:
print "Smaller"
else:
print "Larger"
a = 10
b = 100
if a == b:
print "Same"
else:
print "Different"
a = 10
b = 100
if a != b:
print "Not equal"
else:
print "Same"
Explanation: Making Decisions
End of explanation
a = [1, 2, 3, 4, 5, 6]
for i in a:
print i
for i in range(0, 20, 2):
print i
a = 5
while a > 0:
print a
a = a - 1
Explanation: Going Around in Circles
End of explanation |
15,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Meta-Analysis in statsmodels
Statsmodels include basic methods for meta-analysis. This notebook illustrates the current usage.
Status
Step1: Example
Step2: estimate effect size standardized mean difference
Step3: Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau
Method option for random effect method_re="chi2" or method_re="dl", both names are accepted.
This is commonly referred to as the DerSimonian-Laird method, it is based on a moment estimator based on pearson chi2 from the fixed effects estimate.
Step4: Using iterated, Paule-Mandel estimate for random effects variance tau
The method commonly referred to as Paule-Mandel estimate is a method of moment estimate for the random effects variance that iterates between mean and variance estimate until convergence.
Step5: Example Kacker interlaboratory mean
In this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate an overall average.
Step7: Meta-analysis of proportions
In the following example the random effect variance tau is estimated to be zero.
I then change two counts in the data, so the second example has random effects variance greater than zero.
Step8: changing data to have positive random effects variance
Step9: Replicate fixed effect analysis using GLM with var_weights
combine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.
The scale option in GLM.fit can be used to replicate fixed meta-analysis with fixed and with HKSJ/WLS scale
Step10: We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.
Step11: Using HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using pearson chi2, which is also the default for the gaussian family.
Step12: Mantel-Hanszel odds-ratio using contingency tables
The fixed effect for the log-odds-ratio using the Mantel-Hanszel can be directly computed using StratifiedTable.
We need to create a 2 x 2 x k contingency table to be used with StratifiedTable.
Step13: compare pooled log-odds-ratio and standard error to R meta package
Step14: check conversion to stratified contingency table
Row sums of each table are the sample sizes for treatment and control experiments
Step15: Results from R meta package
```
res_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm="OR", Q.Cochrane=FALSE, method="MH", method.tau="DL", hakn=FALSE, backtransf=FALSE)
res_mb_hk
logOR 95%-CI %W(fixed) %W(random)
1 2.7081 [ 0.5265; 4.8896] 0.3 0.7
2 1.2567 [ 0.2658; 2.2476] 2.1 3.2
3 0.3749 [-0.3911; 1.1410] 5.4 5.4
4 1.6582 [ 0.3245; 2.9920] 0.9 1.8
5 0.7850 [-0.0673; 1.6372] 3.5 4.4
6 0.3617 [-0.1528; 0.8762] 12.1 11.8
7 0.5754 [-0.3861; 1.5368] 3.0 3.4
8 0.2505 [-0.4881; 0.9892] 6.1 5.8
9 0.6506 [-0.3877; 1.6889] 2.5 3.0
10 0.0918 [-0.8067; 0.9903] 4.5 3.9
11 0.2739 [-0.1047; 0.6525] 23.1 21.4
12 0.4858 [ 0.0804; 0.8911] 18.6 18.8
13 0.1823 [-0.6830; 1.0476] 4.6 4.2
14 0.9808 [-0.4178; 2.3795] 1.3 1.6
15 1.3122 [-1.0055; 3.6299] 0.4 0.6
16 -0.2595 [-1.4450; 0.9260] 3.1 2.3
17 0.1384 [-0.5076; 0.7844] 8.5 7.6
Number of studies combined | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, optimize
from statsmodels.regression.linear_model import WLS
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.stats.meta_analysis import (
effectsize_smd, effectsize_2proportions, combine_effects,
_fit_tau_iterative, _fit_tau_mm, _fit_tau_iter_mm)
# increase line length for pandas
pd.set_option('display.width', 100)
Explanation: Meta-Analysis in statsmodels
Statsmodels includes basic methods for meta-analysis. This notebook illustrates the current usage.
Status: The results have been verified against R meta and metafor packages. However, the API is still experimental and will still change. Some options for additional methods that are available in R meta and metafor are missing.
The support for meta-analysis has 3 parts:
effect size functions: this currently includes
effectsize_smd computes effect sizes and their standard errors for the standardized mean difference,
effectsize_2proportions computes effect sizes for comparing two independent proportions using risk difference, (log) risk ratio, (log) odds-ratio or arcsine square root transformation
The combine_effects function computes fixed and random effects estimates for the overall mean or effect. The returned results instance includes a forest plot function.
helper functions to estimate the random effect variance, tau-squared
The estimate of the overall effect size in combine_effects can also be performed using WLS or GLM with var_weights.
Finally, the meta-analysis functions currently do not include the Mantel-Haenszel method. However, the fixed effects results can be computed directly using StratifiedTable as illustrated below.
End of explanation
data = [
["Carroll", 94, 22,60,92, 20,60],
["Grant", 98, 21,65, 92,22, 65],
["Peck", 98, 28, 40,88 ,26, 40],
["Donat", 94,19, 200, 82,17, 200],
["Stewart", 98, 21,50, 88,22 , 45],
["Young", 96,21,85, 92 ,22, 85]]
colnames = ["study","mean_t","sd_t","n_t","mean_c","sd_c","n_c"]
rownames = [i[0] for i in data]
dframe1 = pd.DataFrame(data, columns=colnames)
rownames
mean2, sd2, nobs2, mean1, sd1, nobs1 = np.asarray(dframe1[["mean_t","sd_t","n_t","mean_c","sd_c","n_c"]]).T
rownames = dframe1["study"]
rownames.tolist()
np.array(nobs1 + nobs2)
Explanation: Example
End of explanation
eff, var_eff = effectsize_smd(mean2, sd2, nobs2, mean1, sd1, nobs1)
Explanation: estimate effect size standardized mean difference
End of explanation
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=True, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
res3.cache_ci
res3.method_re
fig = res3.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res3 = combine_effects(eff, var_eff, method_re="chi2", use_t=False, row_names=rownames)
# TODO: we still need better information about conf_int of individual samples
# We don't have enough information in the model for individual confidence intervals
# if those are not based on normal distribution.
res3.conf_int_samples(nobs=np.array(nobs1 + nobs2))
print(res3.summary_frame())
Explanation: Using one-step chi2, DerSimonian-Laird estimate for random effects variance tau
Method option for random effect method_re="chi2" or method_re="dl", both names are accepted.
This is commonly referred to as the DerSimonian-Laird method; it is a moment estimator built on the Pearson chi2 statistic from the fixed effects estimate.
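For reference (standard notation, not taken from the notebook itself): with fixed-effect weights $w_i = 1/v_i$ and Cochran's $Q = \sum_i w_i (y_i - \bar{y}_{FE})^2$, the DerSimonian-Laird moment estimate is
$$\hat{\tau}^2_{DL} = \max\left(0,\; \frac{Q - (k - 1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right)$$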
End of explanation
res4 = combine_effects(eff, var_eff, method_re="iterated", use_t=False, row_names=rownames)
res4_df = res4.summary_frame()
print("method RE:", res4.method_re)
print(res4.summary_frame())
fig = res4.plot_forest()
Explanation: Using iterated, Paule-Mandel estimate for random effects variance tau
The method commonly referred to as the Paule-Mandel estimate is a method-of-moments estimate for the random effects variance that iterates between the mean and variance estimates until convergence.
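In standard notation (for reference only, not from the notebook), the Paule-Mandel value of $\tau^2$ solves the estimating equation
$$\sum_i \frac{(y_i - \hat{\mu}(\tau^2))^2}{v_i + \tau^2} = k - 1, \qquad \hat{\mu}(\tau^2) = \frac{\sum_i y_i/(v_i + \tau^2)}{\sum_i 1/(v_i + \tau^2)}$$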
End of explanation
eff = np.array([61.00, 61.40, 62.21, 62.30, 62.34, 62.60, 62.70,
62.84, 65.90])
var_eff = np.array([0.2025, 1.2100, 0.0900, 0.2025, 0.3844, 0.5625,
0.0676, 0.0225, 1.8225])
rownames = ['PTB', 'NMi', 'NIMC', 'KRISS', 'LGC', 'NRC', 'IRMM', 'NIST', 'LNE']
res2_DL = combine_effects(eff, var_eff, method_re="dl", use_t=True, row_names=rownames)
print("method RE:", res2_DL.method_re)
print(res2_DL.summary_frame())
fig = res2_DL.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
res2_PM = combine_effects(eff, var_eff, method_re="pm", use_t=True, row_names=rownames)
print("method RE:", res2_PM.method_re)
print(res2_PM.summary_frame())
fig = res2_PM.plot_forest()
fig.set_figheight(6)
fig.set_figwidth(6)
Explanation: Example Kacker interlaboratory mean
In this example the effect size is the mean of measurements in a lab. We combine the estimates from several labs to estimate an overall average.
End of explanation
import io
ss = """\
study,nei,nci,e1i,c1i,e2i,c2i,e3i,c3i,e4i,c4i
1,19,22,16.0,20.0,11,12,4.0,8.0,4,3
2,34,35,22.0,22.0,18,12,15.0,8.0,15,6
3,72,68,44.0,40.0,21,15,10.0,3.0,3,0
4,22,20,19.0,12.0,14,5,5.0,4.0,2,3
5,70,32,62.0,27.0,42,13,26.0,6.0,15,5
6,183,94,130.0,65.0,80,33,47.0,14.0,30,11
7,26,50,24.0,30.0,13,18,5.0,10.0,3,9
8,61,55,51.0,44.0,37,30,19.0,19.0,11,15
9,36,25,30.0,17.0,23,12,13.0,4.0,10,4
10,45,35,43.0,35.0,19,14,8.0,4.0,6,0
11,246,208,169.0,139.0,106,76,67.0,42.0,51,35
12,386,141,279.0,97.0,170,46,97.0,21.0,73,8
13,59,32,56.0,30.0,34,17,21.0,9.0,20,7
14,45,15,42.0,10.0,18,3,9.0,1.0,9,1
15,14,18,14.0,18.0,13,14,12.0,13.0,9,12
16,26,19,21.0,15.0,12,10,6.0,4.0,5,1
17,74,75,,,42,40,,,23,30
"""
df3 = pd.read_csv(io.StringIO(ss))
df_12y = df3[["e2i", "nei", "c2i", "nci"]]
# TODO: currently 1 is reference, switch labels
count1, nobs1, count2, nobs2 = df_12y.values.T
dta = df_12y.values.T
eff, var_eff = effectsize_2proportions(*dta, statistic="rd")
eff, var_eff
res5 = combine_effects(eff, var_eff, method_re="iterated", use_t=False)#, row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print("RE variance tau2:", res5.tau2)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
Explanation: Meta-analysis of proportions
In the following example the random effect variance tau is estimated to be zero.
I then change two counts in the data, so the second example has random effects variance greater than zero.
End of explanation
dta_c = dta.copy()
dta_c.T[0, 0] = 18
dta_c.T[1, 0] = 22
dta_c.T
eff, var_eff = effectsize_2proportions(*dta_c, statistic="rd")
res5 = combine_effects(eff, var_eff, method_re="iterated", use_t=False)#, row_names=rownames)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
res5 = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res5_df = res5.summary_frame()
print("method RE:", res5.method_re)
print(res5.summary_frame())
fig = res5.plot_forest()
fig.set_figheight(8)
fig.set_figwidth(6)
Explanation: changing data to have positive random effects variance
End of explanation
from statsmodels.genmod.generalized_linear_model import GLM
eff, var_eff = effectsize_2proportions(*dta_c, statistic="or")
res = combine_effects(eff, var_eff, method_re="chi2", use_t=False)
res_frame = res.summary_frame()
print(res_frame.iloc[-4:])
Explanation: Replicate fixed effect analysis using GLM with var_weights
combine_effects computes weighted average estimates which can be replicated using GLM with var_weights or with WLS.
The scale option in GLM.fit can be used to replicate the fixed-effects meta-analysis both with a fixed scale and with the HKSJ/WLS scale.
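A minimal WLS sketch of the same weighted average (using the eff and var_eff arrays defined above; the point estimate matches the fixed-effect combined mean, while the reported standard error depends on how the scale is treated):
python
mod_wls = WLS(eff, np.ones(len(eff)), weights=1 / var_eff)
res_wls = mod_wls.fit()
print(res_wls.params)   # same point estimate as the fixed effect row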
End of explanation
weights = 1 / var_eff
mod_glm = GLM(eff, np.ones(len(eff)),
var_weights=weights)
res_glm = mod_glm.fit(scale=1.)
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc["fixed effect", ["ci_low", "ci_upp"]].values
Explanation: We need to fix scale=1 in order to replicate standard errors for the usual meta-analysis.
End of explanation
res_glm = mod_glm.fit(scale="x2")
print(res_glm.summary().tables[1])
# check results
res_glm.scale, res_glm.conf_int() - res_frame.loc["fixed effect", ["ci_low", "ci_upp"]].values
Explanation: Using HKSJ variance adjustment in meta-analysis is equivalent to estimating the scale using Pearson chi2, which is also the default for the Gaussian family.
End of explanation
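# (Added sketch, not part of the original notebook.) The weighted average can also be
# replicated with WLS using weights=1/var_eff: the point estimate matches the fixed
# effect, and the WLS-estimated scale corresponds to the HKSJ/WLS-scale case above.
from statsmodels.regression.linear_model import WLS
mod_wls = WLS(eff, np.ones(len(eff)), weights=1 / var_eff)
res_wls = mod_wls.fit()
print(res_wls.summary().tables[1])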
t, nt, c, nc = dta_c
counts = np.column_stack([t, nt - t, c, nc - c])
ctables = counts.T.reshape(2, 2, -1)
ctables[:, :, 0]
counts[0]
dta_c.T[0]
import statsmodels.stats.api as smstats
st = smstats.StratifiedTable(ctables.astype(np.float64))
Explanation: Mantel-Haenszel odds-ratio using contingency tables
The fixed effect for the log-odds-ratio using the Mantel-Haenszel method can be directly computed using StratifiedTable.
We need to create a 2 x 2 x k contingency table to be used with StratifiedTable.
End of explanation
st.logodds_pooled, st.logodds_pooled - 0.4428186730553189 # R meta
st.logodds_pooled_se, st.logodds_pooled_se - 0.08928560091027186 # R meta
st.logodds_pooled_confint()
print(st.test_equal_odds())
print(st.test_null_odds())
Explanation: compare pooled log-odds-ratio and standard error to R meta package
End of explanation
ctables.sum(1)
nt, nc
Explanation: check conversion to stratified contingency table
Row sums of each table are the sample sizes for treatment and control experiments
End of explanation
print(st.summary())
Explanation: Results from R meta package
```
res_mb_hk = metabin(e2i, nei, c2i, nci, data=dat2, sm="OR", Q.Cochrane=FALSE, method="MH", method.tau="DL", hakn=FALSE, backtransf=FALSE)
res_mb_hk
logOR 95%-CI %W(fixed) %W(random)
1 2.7081 [ 0.5265; 4.8896] 0.3 0.7
2 1.2567 [ 0.2658; 2.2476] 2.1 3.2
3 0.3749 [-0.3911; 1.1410] 5.4 5.4
4 1.6582 [ 0.3245; 2.9920] 0.9 1.8
5 0.7850 [-0.0673; 1.6372] 3.5 4.4
6 0.3617 [-0.1528; 0.8762] 12.1 11.8
7 0.5754 [-0.3861; 1.5368] 3.0 3.4
8 0.2505 [-0.4881; 0.9892] 6.1 5.8
9 0.6506 [-0.3877; 1.6889] 2.5 3.0
10 0.0918 [-0.8067; 0.9903] 4.5 3.9
11 0.2739 [-0.1047; 0.6525] 23.1 21.4
12 0.4858 [ 0.0804; 0.8911] 18.6 18.8
13 0.1823 [-0.6830; 1.0476] 4.6 4.2
14 0.9808 [-0.4178; 2.3795] 1.3 1.6
15 1.3122 [-1.0055; 3.6299] 0.4 0.6
16 -0.2595 [-1.4450; 0.9260] 3.1 2.3
17 0.1384 [-0.5076; 0.7844] 8.5 7.6
Number of studies combined: k = 17
logOR 95%-CI z p-value
Fixed effect model 0.4428 [0.2678; 0.6178] 4.96 < 0.0001
Random effects model 0.4295 [0.2504; 0.6086] 4.70 < 0.0001
Quantifying heterogeneity:
tau^2 = 0.0017 [0.0000; 0.4589]; tau = 0.0410 [0.0000; 0.6774];
I^2 = 1.1% [0.0%; 51.6%]; H = 1.01 [1.00; 1.44]
Test of heterogeneity:
Q d.f. p-value
16.18 16 0.4404
Details on meta-analytical method:
- Mantel-Haenszel method
- DerSimonian-Laird estimator for tau^2
- Jackson method for confidence interval of tau^2 and tau
res_mb_hk$TE.fixed
[1] 0.4428186730553189
res_mb_hk$seTE.fixed
[1] 0.08928560091027186
c(res_mb_hk$lower.fixed, res_mb_hk$upper.fixed)
[1] 0.2678221109331694 0.6178152351774684
```
End of explanation |
15,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Concrete Slump Test - UCI
Analysis of the <a href="https
Step1: Univariate Analysis
Step2: Correlation With the Target Columns
Step3: Correlation between Features
Step4: Bivariate Analysis
Scatterplot and regression for each {feature, target} pair.
Step5: Apply Support Vector Regression | Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import seaborn as sns
data = pd.read_csv('concrete_slump.csv')
data = data.drop('No', axis=1)
data.head()
Explanation: Concrete Slump Test - UCI
Analysis of the <a href="https://archive.ics.uci.edu/ml/datasets/Concrete+Slump+Test">concrete slump test dataset from UCI.</a>
End of explanation
_, axes = pylab.subplots(len(data.columns), 1, figsize=(5, 20))
for i, fname in enumerate(data.columns):
data.loc[:, fname].plot(kind='hist', title=fname, ax=axes[i])
pylab.tight_layout()
Explanation: Univariate Analysis
End of explanation
target_df = data.loc[:, data.columns[-3:]]
target_df.head()
feature_df = data.loc[:, data.columns.difference(target_df.columns)]
feature_df.head()
corrs = target_df.apply(lambda t: feature_df.corrwith(t))
corrs
corrs.plot(kind='bar', subplots=True, rot='30')
Explanation: Correlation With the Target Columns
End of explanation
f_corrs = feature_df.corr()
sns.heatmap(f_corrs, annot=True)
Explanation: Correlation between Features
End of explanation
_, axes = pylab.subplots(len(feature_df.columns), len(target_df.columns), figsize=(20, 30))
for i, fname in enumerate(feature_df.columns):
for j, tname in enumerate(target_df.columns):
sns.regplot(x=fname, y=tname, data=data, ax=axes[i][j])
Explanation: Bivariate Analysis
Scatterplot and regression for each {feature, target} pair.
End of explanation
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
model = SVR(kernel='rbf', C=100, gamma=0.1)
preprocessor = StandardScaler()
estimator = make_pipeline(preprocessor, model)
scores = target_df.apply(lambda t:
pd.Series(data=cross_val_score(estimator=estimator, X=feature_df, y=t, cv=5),
name=t.name))
scores
scores.plot(kind='bar', subplots=True)
Explanation: Apply Support Vector Regression
End of explanation |
15,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. Additional Statistics Functions
Step1: Bootstrap Comparisons
Step2: TOST Equivalence Tests | Python Code:
# Import numpy and set random number generator
import numpy as np
np.random.seed(10)
# Import stats functions
from pymer4.stats import perm_test
# Generate two samples of data: X (M~2, SD~10, N=100) and Y (M~2.5, SD~1, N=100)
x = np.random.normal(loc=2, size=100)
y = np.random.normal(loc=2.5, size=100)
# Between groups t-test. The first value is the t-stat and the
# second is the permuted p-value
result = perm_test(x, y, stat="tstat", n_perm=500, n_jobs=1)
print(result)
# Spearman rank correlation. The first values is spearman's rho
# and the second is the permuted p-value
result = perm_test(x, y, stat="spearmanr", n_perm=500, n_jobs=1)
print(result)
Explanation: 5. Additional Statistics Functions
:code:pymer4 also comes with some flexible routines for various statistical operations such as permutation testing, bootstrapping of arbitrary functions and equivalence testing. Here are a few examples:
Permutation Tests
:code:pymer4 can compute a wide variety of one and two-sample permutation tests including mean differences, t-statistics, effect size comparisons, and correlations
End of explanation
# Import stats function
from pymer4.stats import boot_func
# Define a simple function for a median difference test
def med_diff(x, y):
return np.median(x) - np.median(y)
# Between groups median test with resampling
# The first value is the median difference and the
# second is the lower and upper 95% confidence interval
result = boot_func(x, y, func=med_diff)
print(result)
Explanation: Bootstrap Comparisons
:code:pymer4 can compute a bootstrap comparison using any arbitrary function that takes as input either one or two 1d numpy arrays, and returns a single value.
End of explanation
# Import stats function
from pymer4.stats import tost_equivalence
# Generate some data
lower, upper = -0.1, 0.1
x, y = np.random.normal(0.145, 0.025, 35), np.random.normal(0.16, 0.05, 17)
result = tost_equivalence(x, y, lower, upper, plot=True)
# Print the results dictionary nicely
for k, v in result.items():
print(f"{k}: {v}\n")
Explanation: TOST Equivalence Tests
:code:pymer4 also has experimental support for two-one-sided equivalence tests <https://bit.ly/33wsB5i/>_.
End of explanation |
15,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multivariable regression
Imports and setup
Step1: Load Data
We are going to use daily gas, electricity and water consumption data and weather data. Because we don't want to overload the weather API, we will only use 1 location (Ukkel).
First, let's define the start and end date of the identification data. That is the data to be used to find the model. Later, we will use the model to predict.
Step2: Energy data
We for each consumption type (electricity, gas and water), we create a daily dataframe and save it in the dictionary dfs. The data is obtained from the daily caches.
Step3: Weather and other exogenous data
Run this block to download the weather data and save it to a pickle. This is a large request, and you can only do 2 or 3 of these per day before your credit with Forecast.io runs out!
We also add a column for each day-of-week which may be used by the regression algorithm on a daily basis.
Step4: Put data together
The generator below will return a dataframe with sensor id as first column and all exogenous data as other columns.
Step5: Let's have a peek
Step6: Run Regression Analysis
We run the analysis on monthly and weekly basis. | Python Code:
import os
import pandas as pd
from opengrid.library import houseprint, caching, regression
from opengrid import config
c = config.Config()
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
plt.rcParams['figure.figsize'] = 16,8
# Create houseprint from saved file, if not available, parse the google spreadsheet
try:
hp_filename = os.path.join(c.get('data', 'folder'), 'hp_anonymous.pkl')
hp = houseprint.load_houseprint_from_file(hp_filename)
print("Houseprint loaded from {}".format(hp_filename))
except Exception as e:
print(e)
print("Because of this error we try to build the houseprint from source")
hp = houseprint.Houseprint()
hp.init_tmpo()
Explanation: Multivariable regression
Imports and setup
End of explanation
start = pd.Timestamp('2015-01-01', tz='Europe/Brussels')
end = pd.Timestamp('now', tz='Europe/Brussels')
end_model = pd.Timestamp('2016-12-31', tz='Europe/Brussels') #last day of the data period for the model
Explanation: Load Data
We are going to use daily gas, electricity and water consumption data and weather data. Because we don't want to overload the weather API, we will only use 1 location (Ukkel).
First, let's define the start and end date of the identification data. That is the data to be used to find the model. Later, we will use the model to predict.
End of explanation
caches = {}
dfs = {}
for cons in ['electricity', 'gas', 'water']:
caches[cons] = caching.Cache(variable='{}_daily_total'.format(cons))
dfs[cons] = caches[cons].get(sensors = hp.get_sensors(sensortype=cons))
Explanation: Energy data
We for each consumption type (electricity, gas and water), we create a daily dataframe and save it in the dictionary dfs. The data is obtained from the daily caches.
End of explanation
from opengrid.library import forecastwrapper
weather = forecastwrapper.Weather(location=(50.8024, 4.3407), start=start, end=end - pd.Timedelta(days=1))
irradiances=[
(0, 90), # north vertical
(90, 90), # east vertical
(180, 90), # south vertical
(270, 90), # west vertical
]
orientations = [0, 90, 180, 270]
weather_data = weather.days(irradiances=irradiances,
wind_orients=orientations,
heating_base_temperatures=[0, 6, 8 ,10, 12, 14, 16, 18]).dropna(axis=1)
weather_data.drop(['icon', 'summary', 'moonPhase', 'windBearing', 'temperatureMaxTime', 'temperatureMinTime',
'apparentTemperatureMaxTime', 'apparentTemperatureMinTime', 'uvIndexTime',
'sunsetTime', 'sunriseTime'],
axis=1, inplace=True)
# Add columns for the day-of-week
for i, d in zip(range(7), ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']):
weather_data[d] = 0
weather_data.loc[weather_data.index.weekday == i, d] = 1
weather_data = weather_data.applymap(float)
weather_data.head()
weather_data.columns
Explanation: Weather and other exogenous data
Run this block to download the weather data and save it to a pickle. This is a large request, and you can only do 2 or 3 of these per day before your credit with Forecast.io runs out!
We also add a column for each day-of-week which may be used by the regression algorithm on a daily basis.
End of explanation
def data_generator(consumable):
dfcons = dfs[consumable]
for sensorid in dfcons.columns:
df = pd.concat([dfcons[sensorid], weather_data], axis=1).dropna()
df = df.tz_convert('Europe/Brussels')
yield sensorid, df
Explanation: Put data together
The generator below will return a dataframe with sensor id as first column and all exogenous data as other columns.
End of explanation
cons = 'gas'
analysis_data = data_generator(cons)
sensorid, peek = next(analysis_data)
peek = peek.resample(rule='MS').sum()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot_date(peek.index, peek[sensorid], '-', color='grey', lw=8, label=cons)
for column in peek.columns[1:]:
if 'heatingDegreeDays' in column:
ax2.plot_date(peek.index, peek[column], '-', label=column)
plt.legend()
Explanation: Let's have a peek
End of explanation
cons = 'water'
save_figures = True
analysis_data = data_generator(cons)
mrs_month = []
mrs_month_cv = []
mrs_week = []
for sensorid, data in analysis_data:
data.rename(columns={sensorid:cons}, inplace=True)
df = data.resample(rule='MS').sum()
if len(df) < 2:
continue
# monthly model, statistical validation
mrs_month.append(regression.MVLinReg(df.ix[:end_model], cons, p_max=0.03))
figures = mrs_month[-1].plot(df=df)
if save_figures:
figures[0].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_model_'+sensorid+'.png'), dpi=100)
figures[1].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_results_'+sensorid+'.png'), dpi=100)
# weekly model, statistical validation
df = data.resample(rule='W').sum()
if len(df.ix[:end_model]) < 4:
continue
mrs_week.append(regression.MVLinReg(df.ix[:end_model], cons, p_max=0.02))
if len(df.ix[end_model:]) > 0:
figures = mrs_week[-1].plot(model=False, bar_chart=True, df=df.ix[end_model:])
if save_figures:
figures[0].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_prediction_weekly_'+sensorid+'.png'), dpi=100)
Explanation: Run Regression Analysis
We run the analysis on monthly and weekly basis.
End of explanation |
15,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Some Data
Step2: Create An Operation To Execute On The Data
Step3: Traditional Approach
Step4: Parallelism Approach | Python Code:
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool
Explanation: Title: Parallel Processing
Slug: parallel_processing
Summary: Lightweight Parallel Processing in Python.
Date: 2016-01-23 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
This tutorial is inspired by Chris Kiehl's great post on multiprocessing.
Preliminaries
End of explanation
# Create a list of some data
data = range(29999)
Explanation: Create Some Data
End of explanation
# Create a function that takes a data point
def some_function(datum):
# and returns the datum raised to the power of itself
return datum**datum
Explanation: Create An Operation To Execute On The Data
End of explanation
%%time
# Create an empty for the results
results = []
# For each value in the data
for datum in data:
# Append the output of the function when applied to that datum
results.append(some_function(datum))
Explanation: Traditional Approach
End of explanation
# Create a pool of workers equaling cores on the machine
pool = ThreadPool()
%%time
# Apply (map) some_function to the data using the pool of workers
results = pool.map(some_function, data)
# Close the pool
pool.close()
# Combine the results of the workers
pool.join()
Explanation: Parallelism Approach
End of explanation |
15,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Extract curves into striplogs
Sometimes you'd like to summarize or otherwise extract curve data (e.g. wireline log data) into a striplog (e.g. one that represents formations).
We'll start by making some fake CSV text — we'll make 5 formations called A, B, C, D and E
Step2: If you have a CSV file, you can do
Step3: Each element of the striplog is an Interval object, which has a top, base and one or more Components, which represent whatever is in the interval (maybe a rock type, or in this case a formation). There is also a data field, which we will use later.
Step4: We can plot the striplog. By default, it will use a random legend for the colours
Step5: Or we can plot in the 'tops' style
Step6: Random curve data
Make some fake data
Step7: Plot it
Step8: Extract data from the curve into the striplog
Step9: Now we have some the GR data from each unit stored in that unit
Step10: So we could plot a segment of curve, say
Step11: Extract and reduce data
We don't have to store all the data points. We can optionaly pass a function to produce anything we like, and store the result of that
Step12: Other helpful reducing functions | Python Code:
data = """Comp Formation,Depth
A,100
B,200
C,250
D,400
E,600"""
Explanation: Extract curves into striplogs
Sometimes you'd like to summarize or otherwise extract curve data (e.g. wireline log data) into a striplog (e.g. one that represents formations).
We'll start by making some fake CSV text — we'll make 5 formations called A, B, C, D and E:
End of explanation
from striplog import Striplog
s = Striplog.from_csv(text=data, stop=650)
Explanation: If you have a CSV file, you can do:
s = Striplog.from_csv(filename=filename)
But we have text, so we do something slightly different, passing the text argument instead. We also pass a stop argument to tell Striplog to make the last unit (E) 50 m thick. (If you don't do this, it will be 1 m thick).
End of explanation
s[0]
Explanation: Each element of the striplog is an Interval object, which has a top, base and one or more Components, which represent whatever is in the interval (maybe a rock type, or in this case a formation). There is also a data field, which we will use later.
End of explanation
s.plot(aspect=3)
Explanation: We can plot the striplog. By default, it will use a random legend for the colours:
End of explanation
s.plot(style='tops', field='formation', aspect=1)
Explanation: Or we can plot in the 'tops' style:
End of explanation
from welly import Curve
import numpy as np
depth = np.linspace(0, 699, 700)
data = np.sin(depth/10)
curve = Curve(data=data, index=depth)
Explanation: Random curve data
Make some fake data:
End of explanation
import matplotlib.pyplot as plt
fig, axs = plt.subplots(ncols=2, sharey=True)
axs[0] = s.plot(ax=axs[0])
axs[1] = curve.plot(ax=axs[1])
Explanation: Plot it:
End of explanation
s = s.extract(curve.values, basis=depth, name='GR')
Explanation: Extract data from the curve into the striplog
End of explanation
s[1]
Explanation: Now we have some of the GR data from each unit stored in that unit:
End of explanation
plt.plot(s[1].data['GR'])
Explanation: So we could plot a segment of curve, say:
End of explanation
s = s.extract(curve, basis=depth, name='GRmean', function=np.nanmean)
s[1]
Explanation: Extract and reduce data
We don't have to store all the data points. We can optionally pass a function to produce anything we like, and store the result of that:
End of explanation
s[1].data['foo'] = 'bar'
s[1]
Explanation: Other helpful reducing functions:
np.nanmedian — median average (ignoring nans)
np.product — product
np.nansum — sum (ignoring nans)
np.nanmin — minimum (ignoring nans)
np.nanmax — maximum (ignoring nans)
scipy.stats.mstats.mode — mode average
scipy.stats.mstats.hmean — harmonic mean
scipy.stats.mstats.gmean — geometric mean
Or you can write your own, for example:
def trim_mean(a):
    """Compute trimmed mean, trimming min and max."""
    return (np.nansum(a) - np.nanmin(a) - np.nanmax(a)) / a.size
Then do:
s.extract(curve, basis=basis, name='GRtrim', function=trim_mean)
The function doesn't have to return a single number like this, it could return anything you like, including a dictionary.
We can also add bits to the data dictionary manually:
End of explanation |
15,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Machine Learning
LA Team Submission 5 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classification problem ( https
Step1: Data Preprocessing
Step2: We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
Step3: Data Analysis
In this section we will run a Cross Validation routine
Step4: Prediction | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install tpot
from __future__ import print_function
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import LeavePGroupsOut
from sklearn.multiclass import OneVsOneClassifier
from sklearn.ensemble import RandomForestClassifier
from scipy.signal import medfilt
Explanation: Facies classification using Machine Learning
LA Team Submission 5 ##
Lukas Mosser, Alfredo De la Fuente
In this approach for solving the facies classification problem ( https://github.com/seg/2016-ml-contest. ) we will explore the following strategies:
- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.
- Model tuning:
Libraries
We will need to install the following libraries and packages.
End of explanation
#Load Data
data = pd.read_csv('../facies_vectors.csv')
# Parameters
feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
# Store features and labels
X = data[feature_names].values
y = data['Facies'].values
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values
# Fill 'PE' missing values with mean
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(X)
X = imp.transform(X)
Explanation: Data Preprocessing
End of explanation
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
X_aug, padded_rows = augment_features(X, well, depth)
# Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):
split_list.append({'train':train, 'val':val})
def preprocess():
# Preprocess data to use in model
X_train_aux = []
X_test_aux = []
y_train_aux = []
y_test_aux = []
# For each data split
split = split_list[5]
# Remove padded rows
split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
# Select training and validation data from current split
X_tr = X_aug[split_train_no_pad, :]
X_v = X_aug[split['val'], :]
y_tr = y[split_train_no_pad]
y_v = y[split['val']]
# Select well labels for validation data
well_v = well[split['val']]
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
X_train_aux.append( X_tr )
X_test_aux.append( X_v )
y_train_aux.append( y_tr )
y_test_aux.append ( y_v )
X_train = np.concatenate( X_train_aux )
X_test = np.concatenate ( X_test_aux )
y_train = np.concatenate ( y_train_aux )
y_test = np.concatenate ( y_test_aux )
return X_train , X_test , y_train , y_test
Explanation: We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.
End of explanation
from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=20,
verbosity=2,max_eval_time_mins=20,
max_time_mins=100,scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('FinalPipeline.py')
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline ( XGBClassifier(learning_rate=0.12, max_depth=3, min_child_weight=10, n_estimators=150, seed = 17, colsample_bytree = 0.9) )
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat
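# (Added sketch, not part of the original submission.) A minimal cross-validation loop
# over the leave-two-wells-out splits defined above, scored with micro-averaged F1.
cv_scores = []
for split in split_list:
    split_train_no_pad = np.setdiff1d(split['train'], padded_rows)
    y_v_hat = train_and_test(X_aug[split_train_no_pad, :], y[split_train_no_pad],
                             X_aug[split['val'], :], well[split['val']])
    cv_scores.append(f1_score(y[split['val']], y_v_hat, average='micro'))
print('Average CV F1 = %.3f' % np.mean(cv_scores))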
Explanation: Data Analysis
In this section we will run a Cross Validation routine
End of explanation
#Load testing data
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare training data
X_tr = X
y_tr = y
# Augment features
X_tr, padded_rows = augment_features(X_tr, well, depth)
# Removed padded rows
X_tr = np.delete(X_tr, padded_rows, axis=0)
y_tr = np.delete(y_tr, padded_rows, axis=0)
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
# Augment features
X_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)
# Predict test labels
y_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)
# Save predicted labels
test_data['Facies'] = y_ts_hat
test_data.to_csv('Prediction_XX_Final.csv')
Explanation: Prediction
End of explanation |
15,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EDA
Step1: Counting Named Entities
Here is my count_entities function. The idea is to count the total mentions of a person or a place in an article's body or title and save them as columns in my existing data structure.
Step2: These graphs indicate that person and place counts from article are both strongly right skewed. It might be more interesting to compare mean person and place counts among different sections.
Step3: From this pivot table, it seems there are a few distinctions to be made between different sections. Entertainment and sports contain more person mentions on average than any other sections, and world news contains more places in the title than other sections.
Finding Common Named Entities
Now, I'll try to see which people are places get the most mentions in each section. I've written an evaluate_entities function that creates a dictionary of counts for each unique person or place in a particular section or for a particular source.
Step4: Commonly Mentioned People in World News and Entertainment
Step5: Perhaps as expected, Trump is the most commonly mentioned person in world news, with 1,237 mentions in 467 articles, with Obama and Putin coming in second and third. It's interesting to note that most of these names are political figures, but since the tagger only receives unigrams, partial names and first names are mentioned as well.
Step6: Now, I'll compare the top 20 people mentioned in entertainment articles. Trump still takes the number one spot, but interestingly, he's followed by a string of first names. NLTK provides a corpus of male and female-tagged first names, so counting the number of informal mentions or even the ratio of men to women might be a useful feature for classifying articles.
Commonly Mentioned Places in World News and Entertainment
Compared to those from the world news section, the locations in the entertainment section are mostly in the United States | Python Code:
import articledata
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import operator
data = pd.read_pickle('/Users/teresaborcuch/capstone_project/notebooks/pickled_data.pkl')
Explanation: EDA: Named Entity Recognition
Named entity recognition is the process of identifing particular elements from text, such as names, places, quantities, percentages, times/dates, etc. Identifying and quantifying what the general content types an article contains seems like a good predictor of what type of article it is. World news articles, for example, might mention more places than opinion articles, and business articles might have more percentages or dates than other sections. For each article, I'll count how many total mentions of people or places there are in the titles, as well as how many unique mentions for article bodies.
The Stanford NLP group has published three Named-Entity Recognizers. The three class model recognizes locations, persons, and organizations, and at least for now, this is the one I'll be using. Although NER's are written in Java, there is the Pyner interface for Python, as well as an NLTK wrapper (which I'll be using).
Although state-of-the-art taggers can achieve near-human levels of accuracy, this one does make a few mistakes. One obvious flaw is that if I feed the tagger unigram terms, two-part names such as "Michael Jordan" will count as ("Michael", "PERSON") and ("Jordan", "PERSON"). I can roughly correct for this by dividing my average name entity count by two if need be. Additionally, sometimes the tagger mis-tags certain people or places. For instance, it failed to recognize "Cameroon" as a location, but tagged the word "Heartbreak" in the article title "A Personal Trainer for Heartbreak" as a person. That being said, let's see what it can do on my news data.
End of explanation
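# (Added sketch, not part of the original analysis.) A quick demo of the NLTK wrapper
# around the Stanford NER tagger on a single sentence, illustrating the unigram issue
# described above; the jar/classifier paths are the same ones used in count_entities.
import os
from nltk import word_tokenize
from nltk.tag import StanfordNERTagger
os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
print(st.tag(word_tokenize("Michael Jordan visits Paris")))
# e.g. [('Michael', 'PERSON'), ('Jordan', 'PERSON'), ('visits', 'O'), ('Paris', 'LOCATION')]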
def count_entities(data = None, title = True):
# set up tagger
os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
tagged_titles = []
persons = []
places = []
if title:
for x in data['title']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
tagged_titles.append(tags)
for pair_list in tagged_titles:
person_count = 0
place_count = 0
for pair in pair_list:
if pair[1] == 'PERSON':
person_count +=1
elif pair[1] == 'LOCATION':
place_count +=1
else:
continue
persons.append(person_count)
places.append(place_count)
data['total_persons_title'] = persons
data['total_places_title'] = places
else:
for x in data['body']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
tagged_titles.append(tags)
for pair_list in tagged_titles:
person_count = 0
place_count = 0
for pair in pair_list:
if pair[1] == 'PERSON':
person_count +=1
elif pair[1] == 'LOCATION':
place_count +=1
else:
continue
persons.append(person_count)
places.append(place_count)
data['total_persons_body'] = persons
data['total_places_body'] = places
return data
# Count people and places in article titles and save as new columns
# Warning - this is super slow!
data = articledata.count_entities(data = data, title = True)
data.head(1)
# pickle the file to avoid having to re-run this for future analyses
data.to_pickle('/Users/teresaborcuch/capstone_project/notebooks/ss_entity_data.pkl')
sns.set_style("whitegrid", {'axes.grid' : False})
fig = plt.figure(figsize = (12, 5))
ax1 = fig.add_subplot(1,2,1)
ax1.hist(data['total_persons_title'])
ax1.set_xlabel("Total Person Count in Article Titles ")
ax1.set_ylim(0,2500)
ax1.set_xlim(0,6)
ax2 = fig.add_subplot(1,2,2)
ax2.hist(data['total_places_title'])
ax2.set_xlabel("Total Place Count in Article Titles")
ax2.set_ylim(0, 2500)
ax2.set_xlim(0,6)
plt.show()
Explanation: Counting Named Entities
Here is my count_entities function. The idea is to count the total mentions of a person or a place in an article's body or title and save them as columns in my existing data structure.
End of explanation
data.pivot_table(
index = ['condensed_section'],
values = ['total_persons_title', 'total_places_title']).sort_values('total_persons_title', ascending = False)
Explanation: These graphs indicate that person and place counts from article are both strongly right skewed. It might be more interesting to compare mean person and place counts among different sections.
End of explanation
def evaluate_entities(data = None, section = None, source = None):
section_mask = (data['condensed_section'] == section)
source_mask = (data['source'] == source)
if section and source:
masked_data = data[section_mask & source_mask]
elif section:
masked_data = data[section_mask]
elif source:
masked_data = data[source_mask]
else:
masked_data = data
# set up tagger
os.environ['CLASSPATH'] = "/Users/teresaborcuch/stanford-ner-2013-11-12/stanford-ner.jar"
os.environ['STANFORD_MODELS'] = '/Users/teresaborcuch/stanford-ner-2013-11-12/classifiers'
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz')
# dictionaries to hold counts of entities
person_dict = {}
place_dict = {}
for x in masked_data['body']:
tokens = word_tokenize(x)
tags = st.tag(tokens)
for pair in tags:
if pair[1] == 'PERSON':
if pair[0] not in person_dict.keys():
person_dict[pair[0]] = 1
else:
person_dict[pair[0]] +=1
elif pair[1] == 'LOCATION':
if pair[0] not in place_dict.keys():
place_dict[pair[0]] = 1
else:
place_dict[pair[0]] += 1
return person_dict, place_dict
Explanation: From this pivot table, it seems there are a few distinctions to be made between different sections. Entertainment and sports contain more person mentions on average than any other sections, and world news contains more places in the title than other sections.
Finding Common Named Entities
Now, I'll try to see which people and places get the most mentions in each section. I've written an evaluate_entities function that creates a dictionary of counts for each unique person or place in a particular section or for a particular source.
End of explanation
world_persons, world_places = articledata.evaluate_entities(data = data, section = 'world', source = None)
# get top 20 people from world news article bodies
sorted_wp = sorted(world_persons.items(), key=operator.itemgetter(1))
sorted_wp.reverse()
sorted_wp[:20]
Explanation: Commonly Mentioned People in World News and Entertainment
End of explanation
entertainment_persons, entertainment_places = articledata.evaluate_entities(data = data, section = 'entertainment', source = None)
sorted_ep = sorted(entertainment_persons.items(), key=operator.itemgetter(1))
sorted_ep.reverse()
sorted_ep[:20]
Explanation: Perhaps as expected, Trump is the most commonly mentioned person in world news, with 1,237 mentions in 467 articles, with Obama and Putin coming in second and third. It's interesting to note that most of these names are political figures, but since the tagger only receives unigrams, partial names and first names are mentioned as well.
End of explanation
# get top 20 places from world news article bodies
sorted_wp = sorted(world_places.items(), key=operator.itemgetter(1))
sorted_wp.reverse()
sorted_wp[:20]
# get top 20 places from entertainment article bodies
sorted_ep = sorted(entertainment_places.items(), key=operator.itemgetter(1))
sorted_ep.reverse()
sorted_ep[:20]
Explanation: Now, I'll compare the top 20 people mentioned in entertainment articles. Trump still takes the number one spot, but interestingly, he's followed by a string of first names. NLTK provides a corpus of male and female-tagged first names, so counting the number of informal mentions or even the ratio of men to women might be a useful feature for classifying articles.
Commonly Mentioned Places in World News and Entertainment
Compared to those from the world news section, the locations in the entertainment section are mostly in the United States: New York City (pieced together from "New", "York", and "City") seems to be the most common, but Los Angeles, Manhattan, and Chicago also appear. There are a few international destinations (fashionable ones like London and Paris and their respective countries), but nowhere near as many as in the world news section, where, after the U.S, Iran, China, and Russia take the top spots.
End of explanation |
15,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Comparison of the penny models from Chapters 1, 20, and 21
Copyright 2018 Allen Downey
License
Step1: With air resistance
Next we'll add air resistance using the drag equation
I'll start by getting the units we'll need from Pint.
Step2: Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
Step4: Now we can pass the Params object make_system which computes some additional parameters and defines init.
make_system uses the given radius to compute area and the given v_term to compute the drag coefficient C_d.
Step5: Let's make a System
Step7: Here's the slope function, including acceleration due to gravity and drag.
Step8: As always, let's test the slope function with the initial conditions.
Step10: We can use the same event function as in the previous chapter.
Step11: And then run the simulation.
Step12: Here are the results.
Step13: The final height is close to 0, as expected.
Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.
We can get the flight time from results.
Step14: Here's the plot of position as a function of time.
Step15: And velocity as a function of time
Step16: From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant.
Back to Chapter 1
We have now considered three models of a falling penny | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Comparison of the penny models from Chapters 1, 20, and 21
Copyright 2018 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
m = UNITS.meter
s = UNITS.second
kg = UNITS.kilogram
Explanation: With air resistance
Next we'll add air resistance using the drag equation
I'll start by getting the units we'll need from Pint.
End of explanation
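# (Added note.) The drag equation used in slope_func below is
#     F_drag = rho * v**2 * C_d * area / 2
# so drag grows with the square of speed; terminal velocity is reached when
# F_drag balances the weight, which is how make_system computes C_d from v_term.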
params = Params(height = 381 * m,
v_init = 0 * m / s,
g = 9.8 * m/s**2,
mass = 2.5e-3 * kg,
diameter = 19e-3 * m,
rho = 1.2 * kg/m**3,
v_term = 18 * m / s)
Explanation: Now I'll create a Params object to contain the quantities we need. Using a Params object is convenient for grouping the system parameters in a way that's easy to read (and double-check).
End of explanation
def make_system(params):
    """Makes a System object for the given conditions.

    params: Params object
    returns: System object
    """
unpack(params)
area = np.pi * (diameter/2)**2
C_d = 2 * mass * g / (rho * area * v_term**2)
init = State(y=height, v=v_init)
t_end = 30 * s
return System(params, area=area, C_d=C_d,
init=init, t_end=t_end)
Explanation: Now we can pass the Params object make_system which computes some additional parameters and defines init.
make_system uses the given radius to compute area and the given v_term to compute the drag coefficient C_d.
End of explanation
system = make_system(params)
Explanation: Let's make a System
End of explanation
def slope_func(state, t, system):
    """Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object
    returns: derivatives of y and v
    """
y, v = state
rho, C_d, area = system.rho, system.C_d, system.area
mass = system.mass
g = system.g
f_drag = rho * v**2 * C_d * area / 2
a_drag = f_drag / mass
dydt = v
dvdt = -g + a_drag
return dydt, dvdt
Explanation: Here's the slope function, including acceleration due to gravity and drag.
End of explanation
slope_func(system.init, 0, system)
Explanation: As always, let's test the slope function with the initial conditions.
End of explanation
def event_func(state, t, system):
    """Return the height of the penny above the sidewalk."""
y, v = state
return y
Explanation: We can use the same event function as in the previous chapter.
End of explanation
results, details = run_ode_solver(system, slope_func, events=event_func)
details
Explanation: And then run the simulation.
End of explanation
results
Explanation: Here are the results.
End of explanation
t_sidewalk = get_last_label(results)
Explanation: The final height is close to 0, as expected.
Interestingly, the final velocity is not exactly terminal velocity, which suggests that there are some numerical errors.
We can get the flight time from results.
End of explanation
def plot_position(results):
plot(results.y)
decorate(xlabel='Time (s)',
ylabel='Position (m)')
plot_position(results)
Explanation: Here's the plot of position as a function of time.
End of explanation
def plot_velocity(results):
plot(results.v, color='C1', label='v')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot_velocity(results)
Explanation: And velocity as a function of time:
End of explanation
g = 9.8
v_term = 18
t_end = 22.4
ts = linspace(0, t_end, 201)
model1 = -g * ts;
model2 = TimeSeries()
for t in ts:
v = -g * t
if v < -v_term:
model2[t] = -v_term
else:
model2[t] = v
results, details = run_ode_solver(system, slope_func, events=event_func, max_step=0.5)
model3 = results.v;
plot(ts, model1, label='model1', color='gray')
plot(model2, label='model2', color='C0')
plot(model3, label='model3', color='C1')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
plot(model2, label='model2', color='C0')
plot(results.v, label='model3', color='C1')
decorate(xlabel='Time (s)',
ylabel='Velocity (m/s)')
Explanation: From an initial velocity of 0, the penny accelerates downward until it reaches terminal velocity; after that, velocity is constant.
Back to Chapter 1
We have now considered three models of a falling penny:
In Chapter 1, we started with the simplest model, which includes gravity and ignores drag.
As an exercise in Chapter 1, we did a "back of the envelope" calculation assuming constant acceleration until the penny reaches terminal velocity, and then constant velocity until it reaches the sidewalk.
In this chapter, we model the interaction of gravity and drag during the acceleration phase.
We can compare the models by plotting velocity as a function of time.
End of explanation |
15,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$
\newcommand\HH{\mathbf{H}}
\newcommand\SO{\mathbf{S}}
\newcommand\Scat{\boldsymbol\Gamma}
\newcommand\ket[1]{|#1\rangle}
\newcommand\bra[1]{\langle#1|}
\newcommand\set[1]{{#1}}
\newcommand\eig{\epsilon}
$
In this example you will perform transport calculations by projecting scattering states onto molecular orbitals.
We will select Carbon chains as our electrodes and the C${60}$ fullerene as our molecule (it has nice symmetries).
The molecular projection method are created from a subset molecule space ($\set M$)
Step1: Now we have the molecule, all we need are some electrodes and the connection from the electrodes to the molecule.
This electrode is a 2x2 square lattice with transport along the $x$-direction.
Step2: Create the final device by making the electrode have 2 screening layers, then the C$_{60}$ and finally the right-electrode (equivalently setup to the left part).
Step3: The full device is now created and we simply need to create the electronic structure. | Python Code:
C60 = sisl.Geometry.read('C60.xyz')
# Calculate the nearest neighbour distance
dist = C60.distance(R=5)
C60.atom.atom[0] = sisl.Atom(6, R=dist[0] + 0.01)
print(C60)
Explanation: $
\newcommand\HH{\mathbf{H}}
\newcommand\SO{\mathbf{S}}
\newcommand\Scat{\boldsymbol\Gamma}
\newcommand\ket[1]{|#1\rangle}
\newcommand\bra[1]{\langle#1|}
\newcommand\set[1]{{#1}}
\newcommand\eig{\epsilon}
$
In this example you will perform transport calculations by projecting scattering states onto molecular orbitals.
We will select Carbon chains as our electrodes and the C${60}$ fullerene as our molecule (it has nice symmetries).
The molecular projection method are created from a subset molecule space ($\set M$):
\begin{equation}
\HH{\set M} \ket{i} = \eig_i^{\set M}\SO_{\set M} \ket{i}
\end{equation}
with the projectors orthogonalized through the Löwdin transformation
\begin{equation}
\ket{i'}=\SO^{1/2}\ket i
\end{equation}
Then the projection of the scattering states will read
\begin{equation}
\tilde\Scat = \sum\ket{i'}\bra{i'}\Scat \sum \ket{j'}\bra{j'}
\end{equation}
which completes the basis transformation.
These projectors can be performed in TBtrans by defining the $\set M$ region and defining which scattering matrices should be projected onto.
From the above few equations it should be obvious that to create such a projection it is necessary to define a device region where the scattering matrices are living only on the projected region (i.e. the molecule).
Instead of manually setting up the C$_{60}$ molecule we find the coordinates on this web-page (http://www.nanotube.msu.edu/fullerene/fullerene-isomers.html). The coordinates are stored in the C60.xyz file in this directory. Note that when reading this geometry it does not know the orbital distance so we have to calculate it.
End of explanation
elec = sisl.Geometry([0] * 3, sisl.Atom(6, R=1.), [1, 1, 10]).tile(2, 0).tile(2, 1)
elec.set_nsc(a=3, b=1)
H_elec = sisl.Hamiltonian(elec)
H_elec.construct(([0.1, 1.1], [0., -1]))
H_elec.write('ELEC.nc')
Explanation: Now we have the molecule, all we need are some electrodes and the connection from the electrodes to the molecule.
This electrode is a 2x2 square lattice with transport along the $x$-direction.
End of explanation
elec_x = elec.tile(3, 0)
# Translate to origo
C60 = C60.translate(-C60.center(what='xyz'))
C60 = C60.translate([-C60.xyz[:, 0].min(), 0, 0])
# Do trickery to make sure the coordinates are consecutive along x
C60.set_sc([C60.xyz[:, 0].max() + 1., 1., 1.])
device = elec_x.append(C60, 0).append(elec_x, 0)
Explanation: Create the final device by making the electrode have 2 screening layers, then the C$_{60}$ and finally the right-electrode (equivalently setup to the left part).
End of explanation
H = sisl.Hamiltonian(device)
H.construct(([0.1, 1.1], [0., -1]))
# Correct the C_60 couplings to something different
idx_C60 = np.arange(len(elec_x), len(elec_x) + len(C60), dtype=np.int32)
for ia in idx_C60:
idx = device.close(ia, R=[0.1, C60.maxR()])[1]
# On-site is already 0, so don't bother doing anything there
# Split idx into C60 couplings and chain couplings
for i in idx:
if i in idx_C60:
H[ia, i] = -1.5
else:
# Coupling to chain
# Since we are only looping atoms in C60
# we have to also set the coupling into C60
# (to assert Hermiticity)
H[ia, i] = 0.1
H[i, ia] = 0.1
H.write('DEVICE.nc')
Explanation: The full device is now created and we simply need to create the electronic structure.
End of explanation |
15,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ReadCSV
Read the csv file with acceloremeter information.
Convert the data in file to numpy array and then plot it.
Step1: Open the CSV file and transform to an array.
Also get number of samples
Step2: Convert data into separate variables (arrays)
Step3: Plotting graphs of variables
Step4: Explore a range
Visualize an specific range of time | Python Code:
import csv
import numpy as np
import matplotlib.pyplot as plt
Explanation: ReadCSV
Read the csv file with accelerometer information.
Convert the data in file to numpy array and then plot it.
End of explanation
with open('../dataset/PhysicsToolboxSuite/walk_normal_hand_001.csv', 'rb') as csvfile:
data_reader = csv.reader(csvfile, delimiter = ',')
data = np.asarray(list(data_reader))
Explanation: Open the CSV file and transform to an array.
Also get number of samples
End of explanation
timevec_str = data[1:-1,0]
timevec = timevec_str.astype(np.float)
gFx_str = data[1:-1,1]
gFx = gFx_str.astype(np.float)
gFy_str = data[1:-1,2]
gFy = gFy_str.astype(np.float)
gFz_str = data[1:-1,3]
gFz = gFz_str.astype(np.float)
TgF_str = data[1:-1,4]
TgF = TgF_str.astype(np.float)
NumSamples = len(timevec);
TimeDuration = timevec[NumSamples-1] - timevec[0];
msg = 'Number of samples = ' + repr(NumSamples)
print msg
msg = 'Duration of sample = ' + repr(TimeDuration) + ' seconds'
print msg
print ' '
print timevec
print gFx
print gFy
print gFz
print TgF
Explanation: Convert data into separate variables (arrays)
End of explanation
fig = plt.figure()
plt.figure(figsize=(16,8))
plt.plot(timevec,TgF,label="TgF")
plt.grid(linestyle='-', linewidth=1)
plt.legend(loc='upper right')
plt.show()
fig = plt.figure()
plt.figure(figsize=(16,8))
plt.plot(timevec,gFx,label="gFx",color="blue")
plt.plot(timevec,gFy,label="gFy",color="red")
plt.plot(timevec,gFz,label="gFz",color="green")
plt.grid(linestyle='-', linewidth=1)
plt.legend(loc='upper right')
plt.show()
Explanation: Plotting graphs of variables
End of explanation
# Parameters to setup
# idx_ini = 0
# idx_end = NumSamples
idx_ini = 0
idx_end = 120
# Extract range
timevec_zoom = timevec[idx_ini:idx_end]
gFx_zoom = gFx[idx_ini:idx_end]
gFy_zoom = gFy[idx_ini:idx_end]
gFz_zoom = gFz[idx_ini:idx_end]
TgF_zoom = TgF[idx_ini:idx_end]
NumSamples_zoom = len(timevec_zoom);
TimeDuration_zoom = timevec[idx_end-1] - timevec[idx_ini];
msg = 'Number of samples of zoom = ' + repr(NumSamples_zoom)
print msg
msg = 'Duration of sample of zoom = ' + repr(TimeDuration_zoom) + ' seconds'
print msg
fig = plt.figure()
plt.figure(figsize=(16,8))
plt.plot(timevec_zoom,TgF_zoom,'b-o',label="TgF")
plt.grid(linestyle='-', linewidth=1)
plt.legend(loc='upper right')
plt.show()
fig = plt.figure()
plt.figure(figsize=(16,8))
plt.plot(timevec_zoom,gFx_zoom,'b-o',label="gFx",color="blue")
plt.plot(timevec_zoom,gFy_zoom,'r-o',label="gFy",color="red")
plt.plot(timevec_zoom,gFz_zoom,'g-o',label="gFz",color="green")
plt.grid(linestyle='-', linewidth=1)
plt.legend(loc='upper right')
plt.show()
Explanation: Explore a range
Visualize an specific range of time
End of explanation |
15,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
NumPy is a Linear Algebra Library for Python.
NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes. The number of axes is rank.
For example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. In the example pictured below, the array has rank 2 (it is 2-dimensional).
Numpy is also incredibly fast, as it has bindings to C libraries.
For easy installing Numpy
Step1: zeros , ones and eye
np.zeros
Return a new array of given shape and type, filled with zeros.
Step2: ones
Return a new array of given shape and type, filled with ones.
Step3: eye
Return a 2-D array with ones on the diagonal and zeros elsewhere.
Step4: linspace
Returns num evenly spaced samples, calculated over the interval [start, stop].
Step5: Random number and matrix
rand
Random values in a given shape.
Step6: randn
Return a sample (or samples) from the "standard normal" distribution.
andom.standard_normal Similar, but takes a tuple as its argument.
Step7: random
Return random floats in the half-open interval [0.0, 1.0).
Step8: randint
Return n random integers (by default one integer) from low (inclusive) to high (exclusive).
Step9: Shape and Reshape
shape return the shape of data and reshape returns an array containing the same data with a new shape
Step10: Basic Operation
Element wise product and matrix product
Step11: min max argmin argmax mean
Step12: Universal function
numpy also has some funtion for mathmatical operation like exp, log, sqrt, abs and etc .
for find more function click here
Step13: dtype
Step14: No copy & Shallow copy & Deep copy
No copy
###### Simple assignments make no copy of array objects or of their data.
Step15: Shallow copy
Different array objects can share the same data. The view method creates a new array object that looks at the same data.
Step16: Deep copy
The copy method makes a complete copy of the array and its data.
Step17: Broadcating
###### One of the most important concepts for understanding NumPy is broadcasting
It's very useful for performing mathematical operations between arrays of different shapes.
Step18: If you still doubt Why we use Python and NumPy see it. 😉
Step19: I tried to write essential things in numpy that you can start to code and enjoy it but there are many function that i don't write in this book if you neet more informatino click here
Pandas
pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
For easy installing Pandas
bash
sudo pip3 install pandas
Step20: Series
Step21: Dataframe
Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure
Step22: Selection
Step23: creating new column
Step24: removing a column
Step25: Selecting rows
Step26: Conditional Selection
Step27: Multi-Index and Index Hierarchy
Step28: Input and output
Step29: CSV is one of the most important formats, but Pandas is compatible with many other formats like HTML tables, SQL, JSON, etc.
Missing data (NaN)
Step30: Concatenating, merging and ...
Step31: Concatenation
Step32: Merging
Step33: Joining | Python Code:
import numpy as np
a = [1,2,3]
a
b = np.array(a)
b
np.arange(1, 10)
np.arange(1, 10, 2)
Explanation: NumPy
NumPy is a Linear Algebra Library for Python.
NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes. The number of axes is rank.
For example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. In the example pictured below, the array has rank 2 (it is 2-dimensional).
Numpy is also incredibly fast, as it has bindings to C libraries.
To install NumPy easily:
bash
sudo pip3 install numpy
NumPy array
End of explanation
np.zeros(2, dtype=float)
np.zeros((2,3))
Explanation: zeros, ones and eye
np.zeros
Return a new array of given shape and type, filled with zeros.
End of explanation
np.ones(3, )
Explanation: ones
Return a new array of given shape and type, filled with ones.
End of explanation
np.eye(3)
Explanation: eye
Return a 2-D array with ones on the diagonal and zeros elsewhere.
End of explanation
np.linspace(1, 11, 3)
Explanation: linspace
Returns num evenly spaced samples, calculated over the interval [start, stop].
End of explanation
np.random.rand(2)
np.random.rand(2,3,4)
Explanation: Random number and matrix
rand
Random values in a given shape.
End of explanation
np.random.randn(2,3)
Explanation: randn
Return a sample (or samples) from the "standard normal" distribution.
numpy.random.standard_normal is similar, but takes a tuple as its argument.
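A small added example (not in the original notebook) of the tuple-shaped call mentioned above:
python
np.random.standard_normal((2, 3))  # same distribution as randn, but the shape is passed as a single tuple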
End of explanation
np.random.random()
Explanation: random
Return random floats in the half-open interval [0.0, 1.0).
End of explanation
np.random.randint(1,50,10)
np.random.randint(1,40)
Explanation: randint
Return n random integers (by default one integer) from low (inclusive) to high (exclusive).
End of explanation
zero = np.zeros([3,4])
print(zero, ' ', 'shape of zero:', zero.shape)
zero = zero.reshape([2,6])
print()
print(zero)
Explanation: Shape and Reshape
shape returns the shape of the data, and reshape returns an array containing the same data with a new shape
End of explanation
number = np.array([[1,2,],
[3,4]])
number2 = np.array([[1,3],[2,1]])
print('element wise product :\n',number * number2 )
print('matrix product :\n',number.dot(number2)) ## also can use : np.dot(number, number2)
Explanation: Basic Operation
Element wise product and matrix product
End of explanation
numbers = np.random.randint(1,100, 10)
print(numbers)
print('max is :', numbers.max())
print('index of max :', numbers.argmax())
print('min is :', numbers.min())
print('index of min :', numbers.argmin())
print('mean :', numbers.mean())
Explanation: min max argmin argmax mean
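An added note (not in the original): on multi-dimensional arrays these reductions also accept an axis argument:
python
matrix = np.arange(6).reshape(2, 3)
matrix.max(axis=0)   # column-wise maxima -> array([3, 4, 5])
matrix.mean(axis=1)  # row-wise means -> array([1., 4.])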
End of explanation
number = np.arange(1,10).reshape(3,3)
print(number)
print()
print('exp:\n', np.exp(number))
print()
print('sqrt:\n',np.sqrt(number))
Explanation: Universal function
NumPy also has functions for mathematical operations like exp, log, sqrt, abs, etc.
To find more functions, click here
End of explanation
numbers.dtype
Explanation: dtype
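An added example (not in the original): the dtype can also be chosen explicitly when an array is created:
python
np.arange(5, dtype=np.float32).dtype  # dtype('float32')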
End of explanation
number = np.arange(0,20)
number2 = number
print (number is number2 , id(number), id(number2))
print(number)
number2.shape = (4,5)
print(number)
Explanation: No copy & Shallow copy & Deep copy
No copy
###### Simple assignments make no copy of array objects or of their data.
End of explanation
number = np.arange(0,20)
number2 = number.view()
print (number is number2 , id(number), id(number2))
number2.shape = (5,4)
print('number2 shape:', number2.shape,'\nnumber shape:', number.shape)
print('befor:', number)
number2[0][0] = 2222
print()
print('after:', number)
Explanation: Shallow copy
Different array objects can share the same data. The view method creates a new array object that looks at the same data.
End of explanation
number = np.arange(0,20)
number2 = number.copy()
print (number is number2 , id(number), id(number2))
print('befor:', number)
number2[0] = 10
print()
print('after:', number)
print()
print('number2:',number2)
Explanation: Deep copy
The copy method makes a complete copy of the array and its data.
End of explanation
number = np.arange(1,11)
num = 2
print(' number =', number)
print('\n number .* num =',number * num)
number = np.arange(1,10).reshape(3,3)
number2 = np.arange(1,4).reshape(1,3)
number * number2
number = np.array([1,2,3])
print('number =', number)
print('\nnumber =', number + 100)
number = np.arange(1,10).reshape(3,3)
number2 = np.arange(1,4)
print('number: \n', number)
add = number + number2
print()
print('number2: \n ', number2)
print()
print('add: \n', add)
Explanation: Broadcasting
###### One of the important concepts to understand in NumPy is broadcasting
It's very useful for performing mathematical operations between arrays of different shapes.
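A small added illustration (not in the original) of how differently shaped arrays are stretched to a common shape:
python
col = np.arange(3).reshape(3, 1)  # shape (3, 1)
row = np.arange(4).reshape(1, 4)  # shape (1, 4)
(col + row).shape                 # (3, 4): both operands are broadcast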
End of explanation
from time import time
a = np.random.rand(8000000, 1)
c = 0
tic = time()
for i in range(len(a)):
c +=(a[i][0] * a[i][0])
print ('output1:', c)
tak = time()
print('multiply 2 matrix with loop: ', tak - tic)
tic = time()
print('output2:', np.dot(a.T, a))
tak = time()
print('multiply 2 matrix with numpy func: ', tak - tic)
Explanation: If you still doubt why we use Python and NumPy, see this timing comparison. 😉
End of explanation
import pandas as pd
Explanation: I tried to cover the essential parts of NumPy so you can start coding and enjoy it, but there are many functions that I don't cover in this book; if you need more information, click here
Pandas
pandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
To install Pandas easily
bash
sudo pip3 install pandas
End of explanation
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = {'a':10,'b':20,'c':30}
pd.Series(data=my_list)
pd.Series(data=my_list,index=labels)
pd.Series(d)
Explanation: Series
End of explanation
dataframe = pd.DataFrame(np.random.randn(5,4),columns=['A','B','C','D'])
dataframe.head()
Explanation: Dataframe
Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure
End of explanation
dataframe['A']
dataframe[['A', 'D']]
Explanation: Selection
End of explanation
dataframe['E'] = dataframe['A'] + dataframe['B']
dataframe
Explanation: creating new column
End of explanation
dataframe.drop('E', axis=1)
dataframe
dataframe.drop('E', axis=1, inplace=True)
dataframe
Explanation: removing a column
End of explanation
dataframe.loc[0]
dataframe.iloc[0]
dataframe.loc[0 , 'A']
dataframe.loc[[0,2],['A', 'C']]
Explanation: Selecting rows
End of explanation
dataframe > 0.3
dataframe[dataframe > 0.3 ]
dataframe[dataframe['A']>0.3]
dataframe[dataframe['A']>0.3]['B']
dataframe[(dataframe['A']>0.5) & (dataframe['C'] > 0)]
Explanation: Conditional Selection
End of explanation
layer1 = ['g1','g1','g1','g2','g2','g2']
layer2 = [1,2,3,1,2,3]
hier_index = list(zip(layer1,layer2))
hier_index = pd.MultiIndex.from_tuples(hier_index)
hier_index
dataframe2 = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])
dataframe2
dataframe2.loc['g1']
dataframe2.loc['g1'].loc[1]
Explanation: Multi-Index and Index Hierarchy
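An added example (not in the original): pandas also provides xs() for taking a cross-section of a multi-indexed DataFrame:
python
dataframe2.xs(1, level=1)  # all rows whose inner index equals 1, across both outer groups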
End of explanation
titanic = pd.read_csv('Datasets/titanic.csv')
titanic.head()
titanic.drop('Name', axis=1 , inplace = True)
titanic.head()
titanic.to_csv('Datasets/titanic_drop_names.csv')
Explanation: Input and output
End of explanation
titanic.head()
titanic.dropna()
titanic.dropna(axis=1)
titanic.fillna('Fill NaN').head()
Explanation: CSV is one of the most important formats, but Pandas is compatible with many other formats like HTML tables, SQL, JSON, etc.
Missing data (NaN)
End of explanation
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
df1
df2
df3
Explanation: Concatenating, merging and ...
End of explanation
frames = [df1, df2, df3 ]
pd.concat(frames)
#pd.concat(frames, ignore_index=True)
pd.concat(frames, axis=1)
df1.append(df2)
Explanation: Concatenation
End of explanation
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left, right, on= 'key')
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
pd.merge(left, right, on=['key1', 'key2'])
pd.merge(left, right, how='outer', on=['key1', 'key2'])
pd.merge(left, right, how='left', on=['key1', 'key2'])
pd.merge(left, right, how='right', on=['key1', 'key2'])
Explanation: Merging
End of explanation
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left
right
left.join(right)
Explanation: Joining
End of explanation |
15,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='Urbanization_Using_NDBI_top'></a>
Urbanization Using NDBI
<hr>
Background
Among the many urbanization indices, the Normalized Difference Built-Up Index (NDBI) is one of the most commonly used. This notebook shows how to use NDBI in the context of the Open Data Cube.
The formula for NDBI for Landsat is as follows
Step1: <span id="Urbanization_Using_NDBI_plat_prod">Choose Platform and Product ▴</span>
Step2: Choose the platforms and products
Step3: <span id="Urbanization_Using_NDBI_define_extents">Define the Extents of the Analysis ▴</span>
Step4: Visualize the selected area
Step5: <span id="Urbanization_Using_NDBI_retrieve_data">Load Data from the Data Cube ▴</span>
Step6: <span id="Urbanization_Using_NDBI_rgb">Show RGB Representation of the Area ▴</span>
Step7: <span id="Urbanization_Using_NDBI_analysis">Urbanization Analysis ▴</span>
NDWI, NDVI, NDBI
You will very rarely have urban and water classifications apply to the same pixel. For urban analysis, it may make sense to compute not just urban classes, but classes that are unlikely to co-occur with urbanization, such as vegetation (e.g. NDVI) or water (e.g. NDWI).
Step8: Merge into one Dataset
If your data-arrays share the same set of coordinates, and you feel that you'll be using these values together in the future, you should consider merging them into an xarray.Dataset.
Step9: Building a False Color Composite
If you have three weakly correlated measurements, place each measurement on the red, green, and blue channels and visualize them.
Step10: Analyze The False Color Image
Values that adhere strongly to individual classes adhere to their own color channel. In this example, NDVI adheres to green, NDWI adheres to blue, and NDBI adheres to red.
Validate urbanization using other imagery
Double check results using high-resolution imagery. Compare to the false color mosaic
<br> | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import matplotlib.pyplot as plt
import xarray as xr
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.dc_rgb import rgb
from utils.data_cube_utilities.urbanization import NDBI
from utils.data_cube_utilities.vegetation import NDVI
from utils.data_cube_utilities.dc_water_classifier import NDWI
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
dc = datacube.Datacube()
Explanation: <a id='Urbanization_Using_NDBI_top'></a>
Urbanization Using NDBI
<hr>
Background
Among the many urbanization indices, the Normalized Difference Built-Up Index (NDBI) is one of the most commonly used. This notebook shows how to use NDBI in the context of the Open Data Cube.
The formula for NDBI for Landsat is as follows:
$$ NDBI = \frac{(SWIR - NIR)}{(SWIR + NIR)}$$
Note that for arid environments, the Dry Built-Up Index (DBI) may perform better than NDBI, which struggles with arid environments and some kinds of buildings. DBI requires the TIR band of Landsat 8.
<br>
Index
Import Dependencies and Connect to the Data Cube
Choose Platform and Product
Define the Extents of the Analysis
Load Data from the Data Cube
Show RGB Representation of the Area
Urbanization Analysis
<span id="Urbanization_Using_NDBI_import">Import Dependencies and Connect to the Data Cube ▴</span>
End of explanation
# Get available products
products_info = dc.list_products()
print("LANDSAT 7 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_7"]
print("LANDSAT 8 Products:")
products_info[["platform", "name"]][products_info.platform == "LANDSAT_8"]
Explanation: <span id="Urbanization_Using_NDBI_plat_prod">Choose Platform and Product ▴</span>
End of explanation
# These are the platforms (satellites) and products (datacube sets)
# used for this demonstration.
platform = 'LANDSAT_8'
product = 'ls8_usgs_sr_scene'
collection = 'c1'
level = 'l2'
Explanation: Choose the platforms and products
End of explanation
# Kumasi, Ghana
# lat = (6.597724,6.781856)
# lon = (-1.727843,-1.509147)
# Accra, Ghana
lat = (5.5162, 5.6338)
lon = (-0.2657, -0.1373)
time_range = ("2019-01-01", "2019-12-31")
Explanation: <span id="Urbanization_Using_NDBI_define_extents">Define the Extents of the Analysis ▴</span>
End of explanation
display_map(lat, lon)
Explanation: Visualize the selected area
End of explanation
desired_bands = ['red','green','nir','swir1', 'swir2', 'pixel_qa'] # needed by ndvi, ndwi, ndbi and cloud masking
desired_bands = desired_bands + ['blue'] # blue is needed for a true color visualization purposes
landsat_ds = dc.load(product = product,
platform = platform,
lat = lat,
lon = lon,
time = time_range,
measurements = desired_bands,
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform,
collection=collection, level=level)
landsat_ds = landsat_ds.where(clean_mask)
Explanation: <span id="Urbanization_Using_NDBI_retrieve_data">Load Data from the Data Cube ▴</span>
End of explanation
median_composite = landsat_ds.median('time')
plt.figure(figsize=(8,8))
median_composite[['red', 'green', 'blue']].to_array().plot.imshow(vmin=0, vmax=2500)
plt.show()
Explanation: <span id="Urbanization_Using_NDBI_rgb">Show RGB Representation of the Area ▴</span>
End of explanation
ndbi = NDBI(median_composite) # Urbanization
ndvi = NDVI(median_composite) # Dense Vegetation
ndwi = NDWI(median_composite) # High Concentrations of Water
plt.figure(figsize=(8,8))
ndvi.plot(cmap = "Greens")
plt.show()
plt.figure(figsize=(8,8))
ndwi.plot(cmap = "Blues")
plt.show()
plt.figure(figsize=(8,8))
ndbi.plot(cmap = "Reds")
plt.show()
Explanation: <span id="Urbanization_Using_NDBI_analysis">Urbanization Analysis ▴</span>
NDWI, NDVI, NDBI
You will very rarely have urban and water classifications apply to the same pixel. For urban analysis, it may make sense to compute not just urban classes, but classes that are unlikely to co-occur with urbanization, such as vegetation (e.g. NDVI) or water (e.g. NDWI).
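As an added, hedged illustration (the notebook itself uses the imported NDBI/NDVI/NDWI helpers), the NDBI formula can also be evaluated directly from the composite's bands, assuming the 'swir1' and 'nir' measurement names loaded above:
python
# Direct evaluation of NDBI = (SWIR - NIR) / (SWIR + NIR)
ndbi_direct = (median_composite.swir1 - median_composite.nir) / (median_composite.swir1 + median_composite.nir)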
End of explanation
urbanization_dataset = xr.merge((ndvi.rename('NDVI'), ndwi.rename('NDWI'), ndbi.rename('NDBI')))
urbanization_dataset
Explanation: Merge into one Dataset
If your data-arrays share the same set of coordinates, and you feel that you'll be using these values together in the future, you should consider merging them into an xarray.Dataset.
End of explanation
plt.figure(figsize=(8,8))
urbanization_dataset[["NDBI", "NDVI", "NDWI"]].to_array().plot.imshow(vmin=0, vmax=1)
plt.show()
Explanation: Building a False Color Composite
If you have three weakly correlated measurements, place each measurement on the red, green, and blue channels and visualize them.
End of explanation
display_map(latitude = lat ,longitude = lon)
Explanation: Analyze The False Color Image
Values that adhere strongly to individual classes adhere to their own color channel. In this example, NDVI adheres to green, NDWI adheres to blue, and NDBI adheres to red.
Validate urbanization using other imagery
Double check results using high-resolution imagery. Compare to the false color mosaic
<br>
End of explanation |
15,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are
Step1: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
Step2: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
Step3: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
Step4: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail
Step5: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab
Step6: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include
Step7: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
Step8: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things
Step9: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we
Step10: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
Step11: From the plot you can see several things | Python Code:
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neural Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, its main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are:
While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution.
Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand.
You can get more details about various popular machine learning libraries in this comparison.
To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing, of course!). The common goal is to predict the median house value using the other columns as features.
This lab will conclude with two assignments:
Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model.
Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem.
Let's start by importing some of the libraries we will use for this tutorial:
End of explanation
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
End of explanation
print dataset['DESCR']
Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
End of explanation
# Create a dataset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
End of explanation
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail:
End of explanation
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab:
We will first re-split the data into a feature set (X) and a target set (y)
Then we will normalize the feature set so that the values range from 0 to 1
Finally, we will split both data sets into a training and test set.
End of explanation
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 16
num_hidden_2 = 16
learning_rate = 0.0001
training_epochs = 200
dropout_keep_prob = 1.0 # set to no dropout by default
# variable to control the resolution at which the training results are stored
display_step = 1
Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include:
batch size, which sets how many training samples are used at a time
learning rate which controls how quickly the gradient descent algorithm works
training epochs which sets how many rounds of training occurs
dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here.
End of explanation
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
End of explanation
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
Each layer of the neural network needs it's own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
# computing first hidden layer from input, using sigmoid activation function
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
# computing second hidden layer from first hidden layer, using sigmoid activation function
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
For the optimization function we will use basic Gradient Descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.sub(prediction, _y)))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
Explanation: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things:
describes the architecture of the network, including how many layers it has and how many neurons are in each layer
initializes all the parameters of the network
describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result
defines the loss function which describes how well the model is performing
specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss
Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'.
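A minimal added illustration of this define-then-run pattern (using the same TF 1.x-era API as the rest of this lab; not part of the original notebook):
python
tiny_graph = tf.Graph()
with tiny_graph.as_default():
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    c = a * b            # only the graph is defined here; nothing is computed yet
with tf.Session(graph=tiny_graph) as sess:
    print(sess.run(c))   # 6.0 -- computation happens only when we ask for c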
End of explanation
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as ouputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calculate the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we:
Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above.
Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow.
End of explanation
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
End of explanation
# To fix overfitting, we can modify dropout or artificially expand the training data,
# using cross-entropy... tried increasing the number of epochs and that made the error graph
# look better...
# maybe use a different algorithm to optimize the weights and so on?
# helper variables
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 100 #INCREASING the batch size helps regularize the learning
num_hidden_1 = 16
num_hidden_2 = 16
learning_rate = 0.0001
training_epochs = 200
dropout_keep_prob = 0.6 # reducing the keep prob of dropout closer to 0 reduces the error, yay
# dropout! just gotta make sure also that this doesn't cause the graph to shift away from
# the right values
# variable to control the resolution at which the training results are stored
display_step = 1
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.initialize_all_variables().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = range(num_samples)
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as ouputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calculate the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
Explanation: From the plot you can see several things:
the error on the training data smoothly improves throughout the training, which is to be expected from the gradient descent algorithm
the error of each mini-batch is more noisy than the entire training set (which is also to be expected since we are only using a portion of the data each time) but in general follows the same trajectory
the error over the training set bottoms out around the 120th epoch, which might represent the best model fit for this dataset
All of this is to be expected, however the most important thing to notice with this plot is the error over the test set (which is actually measuring the generalized performance of the model). You can see that for the first 35 or so epochs the error over the test set improves in pace with the error over the training set. This means that the tuning of the model is fitting the underlying structures in both datasets. However, after the 70th epoch, the error over the test starts to move back up. This is a very common indication that overfitting of the training set has occured. After this point, further tuning of the model is representing particular features of the training set itself (such as it's particular error or noise), which do not generalize well to other data not seen by the training process. This shows why it is so important to use a separate testing set to evaluate a model, since otherwise it would be impossible to see exactly where this point of overfitting occurs.
Assignment - part 1
There are several common strategies for addressing overfitting which we will cover in class and are also covered here. Go back to the neural network and experiment with different settings for some of the hyper-parameters to see if you can fix this 'double-dip' in the test set error. One approach might be to reduce the number of layers in the network or the number of neurons in each layer, since a simpler model is less likely to overfit. Another approach might be to increase the amount of dropout, since this will artificially limit the complexity of the model.
Bonus: there is one fundamental issue with how I'm using the data from the very beginning which is also contributing to the overfitting problem in this particular case. Can you think of something we can do to the data before training which would ensure that the training and test sets are more similar?
Once you fix the overfitting problem and achieve a minimum test loss of less than 6.0, submit your work as a pull request back to the main project.
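One hedged hint for the bonus question (an added sketch, not the official solution): if the raw rows are not already in random order, shuffling them before the train/test split helps ensure that both sets are drawn from the same distribution.
python
# Added sketch: shuffle rows before splitting (assumes houses_array as defined above)
shuffled_idx = np.random.permutation(houses_array.shape[0])
shuffled = houses_array[shuffled_idx]
X = shuffled[:, :-1] / shuffled[:, :-1].max(axis=0)
y = shuffled[:, -1]
split = int(.7 * shuffled.shape[0])
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]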
End of explanation |
15,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 8: Applying Machine Learning 1 - Sentiment Analysis
https
Step1: 8.2 Introducing the BoW model
BoW(Bag-of-Words)
From the entire set of documents, create a vocabulary of unique tokens, e.g. words
Build a feature vector for each document that contains the counts of how often each word occurs in it
Sparse vector
8.2.1 Transforming words into feature vectors
Raw term frequencies
Step2: 8.2.2 Assessing word relevancy via TF-IDF
TF-IDF(Term Frequency-Inverse Document Frequency)
TF
Step3: 8.2.3 Cleansing text data
Step4: 8.2.4 Processing documents into tokens
Tokenization (tokenize)
Porter stemming algorithm
Step5: 8.2.5 Training a logistic regression model for document classification
Step6: 8.3 Working with bigger data | Python Code:
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
# Load the data
import pyprind
import pandas as pd
import os
pbar = pyprind.ProgBar(50000)
labels = {'pos': 1, 'neg': 0}
df = pd.DataFrame()
for set in ('test', 'train'):
for label in ('pos', 'neg'):
path = os.path.join('.', 'aclImdb', set, label)
for file in os.listdir(path):
with open(os.path.join(path, file), encoding='utf-8') as f:
txt = f.read()
df = df.append([[txt, labels[label]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
import numpy as np
np.random.seed(0)
# Shuffle the row order
df = df.reindex(np.random.permutation(df.index))
# Save to CSV
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
Explanation: Chapter 8: Applying Machine Learning 1 - Sentiment Analysis
https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch08/ch08.ipynb
Natural Language Processing (NLP)
Sentiment analysis
Polarity
8.1 Obtaining the IMDb movie review dataset
Opinion mining
IMDb (Internet Movie Database)
http://ai.stanford.edu/~amaas/data/sentiment/
http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Download it with the following commands:
sh
$ curl -O http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
$ tar xfz aclImdb_v1.tar.gz
Install pyprind to display a progress bar:
sh
$ pip install pyprind
End of explanation
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
# Print the contents of the vocabulary
print(count.vocabulary_)
# Print the feature vectors
print(bag.toarray())
Explanation: 8.2 Introducing the BoW model
BoW(Bag-of-Words)
From the entire set of documents, create a vocabulary of unique tokens, e.g. words
Build a feature vector for each document that contains the counts of how often each word occurs in it
Sparse vector
8.2.1 Transforming words into feature vectors
Raw term frequency: tf(t, d)
End of explanation
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
Explanation: 8.2.2 Assessing word relevancy via TF-IDF
TF-IDF(Term Frequency-Inverse Document Frequency)
TF: term frequency
IDF: inverse document frequency
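For reference (an added note, not from the original chapter): with the settings used in the code above (smooth_idf=True, norm='l2'), scikit-learn's TfidfTransformer computes
$$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \left( \ln\frac{1 + n_d}{1 + \text{df}(d, t)} + 1 \right)$$
where $n_d$ is the number of documents and $\text{df}(d, t)$ is the number of documents containing term $t$; the resulting vectors are then L2-normalized.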
End of explanation
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
Explanation: 8.2.3 Cleansing text data
End of explanation
def tokenizer(text):
return text.split()
tokenizer('runners like running and thus they run')
# Apply Porter stemming
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer_porter('runners like running and thus they run')
# Download the stop words
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
Explanation: 8.2.4 Processing documents into tokens
Tokenization (tokenize)
Porter stemming algorithm: reduces words to their root form
Stop-word removal
End of explanation
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: 8.2.5 Training a logistic regression model for document classification
End of explanation
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv)
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
Explanation: 8.3 Working with bigger data: online algorithms and out-of-core learning
Out-of-core learning: training a model incrementally on mini-batches of data that are too large to fit into memory
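A brief illustrative sketch (not from the book) of why streaming works here: HashingVectorizer is stateless, so it never needs a fit step and never has to keep a growing vocabulary in memory.
from sklearn.feature_extraction.text import HashingVectorizer
hv = HashingVectorizer(decode_error='ignore', n_features=2**21)
print(hv.transform(['an unseen review can be hashed on the fly']).shape) # works without any fit step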
End of explanation |
15,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing epoched data
This tutorial shows how to plot epoched data as time series, how to plot the
spectral density of epoched data, how to plot epochs as an image map, and how to
plot the sensor locations and projectors stored in ~mne.Epochs objects.
We'll start by importing the modules we need, loading the continuous (raw)
sample data, and cropping it to save memory
Step1: To create the ~mne.Epochs data structure, we'll extract the event
IDs stored in the
Step2: Plotting Epochs as time series
.. sidebar
Step3: To see all sensors at once, we can use butterfly mode and group by selection
Step4: Plotting projectors from an Epochs object
In the plot above we can see heartbeat artifacts in the magnetometer
channels, so before we continue let's load ECG projectors from disk and apply
them to the data
Step5: Just as we saw in the tut-section-raw-plot-proj section, we can plot
the projectors present in an ~mne.Epochs object using the same
~mne.Epochs.plot_projs_topomap method. Since the original three
empty-room magnetometer projectors were inherited from the
~mne.io.Raw file, and we added two ECG projectors for each sensor
type, we should see nine projector topomaps
Step6: Note that these field maps illustrate aspects of the signal that have
already been removed (because projectors in ~mne.io.Raw data are
applied by default when epoching, and because we called
~mne.Epochs.apply_proj after adding additional ECG projectors from
file). You can check this by examining the 'active' field of the
projectors
Step7: Plotting sensor locations
Just like ~mne.io.Raw objects, ~mne.Epochs objects
keep track of sensor locations, which can be visualized with the
~mne.Epochs.plot_sensors method
Step8: Plotting the power spectrum of Epochs
Again, just like ~mne.io.Raw objects, ~mne.Epochs objects
have a ~mne.Epochs.plot_psd method for plotting the spectral
density_ of the data.
Step9: It is also possible to plot spectral estimates across sensors as a scalp
topography, using ~mne.Epochs.plot_psd_topomap. The default parameters will
plot five frequency bands (δ, θ, α, β, γ), will compute power based on
magnetometer channels, and will plot the power estimates in decibels
Step10: Just like ~mne.Epochs.plot_projs_topomap,
~mne.Epochs.plot_psd_topomap has a vlim='joint' option for fixing
the colorbar limits jointly across all subplots, to give a better sense of
the relative magnitude in each frequency band. You can change which channel
type is used via the ch_type parameter, and if you want to view
different frequency bands than the defaults, the bands parameter takes a
list of tuples, with each tuple containing either a single frequency and a
subplot title, or lower/upper frequency limits and a subplot title
Step11: If you prefer untransformed power estimates, you can pass dB=False. It is
also possible to normalize the power estimates by dividing by the total power
across all frequencies, by passing normalize=True. See the docstring of
~mne.Epochs.plot_psd_topomap for details.
Plotting Epochs as an image map
A convenient way to visualize many epochs simultaneously is to plot them as
an image map, with each row of pixels in the image representing a single
epoch, the horizontal axis representing time, and each pixel's color
representing the signal value at that time sample for that epoch. Of course,
this requires either a separate image map for each channel, or some way of
combining information across channels. The latter is possible using the
~mne.Epochs.plot_image method; the former can be achieved with the
~mne.Epochs.plot_image method (one channel at a time) or with the
~mne.Epochs.plot_topo_image method (all sensors at once).
By default, the image map generated by ~mne.Epochs.plot_image will be
accompanied by a scalebar indicating the range of the colormap, and a time
series showing the average signal across epochs and a bootstrapped 95%
confidence band around the mean. ~mne.Epochs.plot_image is a highly
customizable method with many parameters, including customization of the
auxiliary colorbar and averaged time series subplots. See the docstrings of
~mne.Epochs.plot_image and mne.viz.plot_compare_evokeds (which is
used to plot the average time series) for full details. Here we'll show the
mean across magnetometers for all epochs with an auditory stimulus
Step12: To plot image maps for individual sensors or a small group of sensors, use
the picks parameter. Passing combine=None (the default) will yield
separate plots for each sensor in picks; passing combine='gfp' will
plot the global field power (useful for combining sensors that respond with
opposite polarity).
Step13: To plot an image map for all sensors, use
~mne.Epochs.plot_topo_image, which is optimized for plotting a large
number of image maps simultaneously, and (in interactive sessions) allows you
to click on each small image map to pop open a separate figure with the
full-sized image plot (as if you had called ~mne.Epochs.plot_image on
just that sensor). At the small scale shown in this tutorial it's hard to see
much useful detail in these plots; it's often best when plotting
interactively to maximize the topo image plots to fullscreen. The default is
a figure with black background, so here we specify a white background and
black foreground text. By default ~mne.Epochs.plot_topo_image will
show magnetometers and gradiometers on the same plot (and hence not show a
colorbar, since the sensors are on different scales) so we'll also pass a
~mne.channels.Layout restricting each plot to one channel type.
First, however, we'll also drop any epochs that have unusually high signal
levels, because they can cause the colormap limits to be too extreme and
therefore mask smaller signal fluctuations of interest.
Step14: To plot image maps for all EEG sensors, pass an EEG layout as the layout
parameter of ~mne.Epochs.plot_topo_image. Note also here the use of
the sigma parameter, which smooths each image map along the vertical
dimension (across epochs) which can make it easier to see patterns across the
small image maps (by smearing noisy epochs onto their neighbors, while
reinforcing parts of the image where adjacent epochs are similar). However,
sigma can also disguise epochs that have persistent extreme values and
maybe should have been excluded, so it should be used with caution. | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=120)
Explanation: Visualizing epoched data
This tutorial shows how to plot epoched data as time series, how to plot the
spectral density of epoched data, how to plot epochs as an image map, and how to
plot the sensor locations and projectors stored in ~mne.Epochs objects.
We'll start by importing the modules we need, loading the continuous (raw)
sample data, and cropping it to save memory:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'button': 32}
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, event_id=event_dict,
preload=True)
del raw
Explanation: To create the ~mne.Epochs data structure, we'll extract the event
IDs stored in the :term:stim channel, map those integer event IDs to more
descriptive condition labels using an event dictionary, and pass those to the
~mne.Epochs constructor, along with the ~mne.io.Raw data and the
desired temporal limits of our epochs, tmin and tmax (for a
detailed explanation of these steps, see tut-epochs-class).
End of explanation
catch_trials_and_buttonpresses = mne.pick_events(events, include=[5, 32])
epochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict,
event_color=dict(button='red', face='blue'))
Explanation: Plotting Epochs as time series
.. sidebar:: Interactivity in pipelines and scripts
To use the interactive features of the `~mne.Epochs.plot` method
when running your code non-interactively, pass the ``block=True``
parameter, which halts the Python interpreter until the figure window is
closed. That way, any channels or epochs that you mark as "bad" will be
taken into account in subsequent processing steps.
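For example, in a standalone script one might write (a hedged one-liner that is not executed in this tutorial):
epochs['face'].plot(block=True)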
To visualize epoched data as time series (one time series per channel), the
mne.Epochs.plot method is available. It creates an interactive window
where you can scroll through epochs and channels, enable/disable any
unapplied :term:SSP projectors <projector> to see how they affect the
signal, and even manually mark bad channels (by clicking the channel name) or
bad epochs (by clicking the data) for later dropping. Channels marked "bad"
will be shown in light grey color and will be added to
epochs.info['bads']; epochs marked as bad will be indicated as 'USER'
in epochs.drop_log.
Here we'll plot only the "catch" trials from the sample dataset
<sample-dataset>, and pass in our events array so that the button press
responses also get marked (we'll plot them in red, and plot the "face" events
defining time zero for each epoch in blue). We also need to pass in
our event_dict so that the ~mne.Epochs.plot method will know what
we mean by "button" — this is because subsetting the conditions by
calling epochs['face'] automatically purges the dropped entries from
epochs.event_id:
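A quick, illustrative way to confirm that purging (not part of the original tutorial):
print(epochs['face'].event_id) # only the 'face' entry should remain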
End of explanation
epochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict,
event_color=dict(button='red', face='blue'),
group_by='selection', butterfly=True)
Explanation: To see all sensors at once, we can use butterfly mode and group by selection:
End of explanation
ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_ecg-proj.fif')
ecg_projs = mne.read_proj(ecg_proj_file)
epochs.add_proj(ecg_projs)
epochs.apply_proj()
Explanation: Plotting projectors from an Epochs object
In the plot above we can see heartbeat artifacts in the magnetometer
channels, so before we continue let's load ECG projectors from disk and apply
them to the data:
End of explanation
epochs.plot_projs_topomap(vlim='joint')
Explanation: Just as we saw in the tut-section-raw-plot-proj section, we can plot
the projectors present in an ~mne.Epochs object using the same
~mne.Epochs.plot_projs_topomap method. Since the original three
empty-room magnetometer projectors were inherited from the
~mne.io.Raw file, and we added two ECG projectors for each sensor
type, we should see nine projector topomaps:
End of explanation
print(all(proj['active'] for proj in epochs.info['projs']))
Explanation: Note that these field maps illustrate aspects of the signal that have
already been removed (because projectors in ~mne.io.Raw data are
applied by default when epoching, and because we called
~mne.Epochs.apply_proj after adding additional ECG projectors from
file). You can check this by examining the 'active' field of the
projectors:
End of explanation
epochs.plot_sensors(kind='3d', ch_type='all')
epochs.plot_sensors(kind='topomap', ch_type='all')
Explanation: Plotting sensor locations
Just like ~mne.io.Raw objects, ~mne.Epochs objects
keep track of sensor locations, which can be visualized with the
~mne.Epochs.plot_sensors method:
End of explanation
epochs['auditory'].plot_psd(picks='eeg')
Explanation: Plotting the power spectrum of Epochs
Again, just like ~mne.io.Raw objects, ~mne.Epochs objects
have a ~mne.Epochs.plot_psd method for plotting the spectral
density_ of the data.
End of explanation
epochs['visual/right'].plot_psd_topomap()
Explanation: It is also possible to plot spectral estimates across sensors as a scalp
topography, using ~mne.Epochs.plot_psd_topomap. The default parameters will
plot five frequency bands (δ, θ, α, β, γ), will compute power based on
magnetometer channels, and will plot the power estimates in decibels:
End of explanation
bands = [(10, '10 Hz'), (15, '15 Hz'), (20, '20 Hz'), (10, 20, '10-20 Hz')]
epochs['visual/right'].plot_psd_topomap(bands=bands, vlim='joint',
ch_type='grad')
Explanation: Just like ~mne.Epochs.plot_projs_topomap,
~mne.Epochs.plot_psd_topomap has a vlim='joint' option for fixing
the colorbar limits jointly across all subplots, to give a better sense of
the relative magnitude in each frequency band. You can change which channel
type is used via the ch_type parameter, and if you want to view
different frequency bands than the defaults, the bands parameter takes a
list of tuples, with each tuple containing either a single frequency and a
subplot title, or lower/upper frequency limits and a subplot title:
End of explanation
epochs['auditory'].plot_image(picks='mag', combine='mean')
Explanation: If you prefer untransformed power estimates, you can pass dB=False. It is
also possible to normalize the power estimates by dividing by the total power
across all frequencies, by passing normalize=True. See the docstring of
~mne.Epochs.plot_psd_topomap for details.
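As a hedged sketch of those two variants (not executed above):
epochs['visual/right'].plot_psd_topomap(dB=False) # untransformed power
epochs['visual/right'].plot_psd_topomap(normalize=True, dB=False) # power as a fraction of the total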
Plotting Epochs as an image map
A convenient way to visualize many epochs simultaneously is to plot them as
an image map, with each row of pixels in the image representing a single
epoch, the horizontal axis representing time, and each pixel's color
representing the signal value at that time sample for that epoch. Of course,
this requires either a separate image map for each channel, or some way of
combining information across channels. The latter is possible using the
~mne.Epochs.plot_image method; the former can be achieved with the
~mne.Epochs.plot_image method (one channel at a time) or with the
~mne.Epochs.plot_topo_image method (all sensors at once).
By default, the image map generated by ~mne.Epochs.plot_image will be
accompanied by a scalebar indicating the range of the colormap, and a time
series showing the average signal across epochs and a bootstrapped 95%
confidence band around the mean. ~mne.Epochs.plot_image is a highly
customizable method with many parameters, including customization of the
auxiliary colorbar and averaged time series subplots. See the docstrings of
~mne.Epochs.plot_image and mne.viz.plot_compare_evokeds (which is
used to plot the average time series) for full details. Here we'll show the
mean across magnetometers for all epochs with an auditory stimulus:
End of explanation
epochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'])
epochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'], combine='gfp')
Explanation: To plot image maps for individual sensors or a small group of sensors, use
the picks parameter. Passing combine=None (the default) will yield
separate plots for each sensor in picks; passing combine='gfp' will
plot the global field power (useful for combining sensors that respond with
opposite polarity).
End of explanation
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=150e-6) # 150 µV
epochs.drop_bad(reject=reject_criteria)
for ch_type, title in dict(mag='Magnetometers', grad='Gradiometers').items():
layout = mne.channels.find_layout(epochs.info, ch_type=ch_type)
epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',
font_color='k', title=title)
Explanation: To plot an image map for all sensors, use
~mne.Epochs.plot_topo_image, which is optimized for plotting a large
number of image maps simultaneously, and (in interactive sessions) allows you
to click on each small image map to pop open a separate figure with the
full-sized image plot (as if you had called ~mne.Epochs.plot_image on
just that sensor). At the small scale shown in this tutorial it's hard to see
much useful detail in these plots; it's often best when plotting
interactively to maximize the topo image plots to fullscreen. The default is
a figure with black background, so here we specify a white background and
black foreground text. By default ~mne.Epochs.plot_topo_image will
show magnetometers and gradiometers on the same plot (and hence not show a
colorbar, since the sensors are on different scales) so we'll also pass a
~mne.channels.Layout restricting each plot to one channel type.
First, however, we'll also drop any epochs that have unusually high signal
levels, because they can cause the colormap limits to be too extreme and
therefore mask smaller signal fluctuations of interest.
End of explanation
layout = mne.channels.find_layout(epochs.info, ch_type='eeg')
epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',
font_color='k', sigma=1)
Explanation: To plot image maps for all EEG sensors, pass an EEG layout as the layout
parameter of ~mne.Epochs.plot_topo_image. Note also here the use of
the sigma parameter, which smooths each image map along the vertical
dimension (across epochs) which can make it easier to see patterns across the
small image maps (by smearing noisy epochs onto their neighbors, while
reinforcing parts of the image where adjacent epochs are similar). However,
sigma can also disguise epochs that have persistent extreme values and
maybe should have been excluded, so it should be used with caution.
End of explanation |
15,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook applies different regression methods to the finalized dataset in search of the best regression model.
Data Ingestion and Wrangling
Step1: Lasso Regression
Lasso regression is characterized by the tendency of the regularization to push the weights of variables to zero. This is useful for our data, because some of our features are highly correlated while others are not useful in predicting occupancy.
Step2: "A residuals plot shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate." (http
Step3: Gradient Boosting Regression
Step4: Random Forest Regressor | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
dataset_1min = pd.read_csv('dataset-1min.csv')
print(dataset_1min.shape)
dataset_1min.head(3)
# Delete duplicate rows in the dataset
dataset_1min = dataset_1min.drop_duplicates()
print(dataset_1min.shape)
dataset_1min.head(3)
# Subset the features needed
names = ['temperature', 'humidity', 'co2', 'light', 'noise', 'bluetooth_devices','occupancy_count']
df = dataset_1min[names]
df.head(3)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import scale
# Set occupancy_count as the dependent variable and others as independent variables
data = df.iloc[:,0:-1]
target = df.iloc[:,-1]
# Split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(data, target)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
# Standardize the features (zero mean, unit variance)
from sklearn.preprocessing import StandardScaler
standard_scaler = StandardScaler()
X_train = standard_scaler.fit_transform(X_train)
X_test = standard_scaler.transform(X_test)
Explanation: This notebook applies different regression methods to the finalized dataset in search of the best regression model.
Data Ingestion and Wrangling
End of explanation
# Select the optimal alpha using Yellowbrick
from yellowbrick.regressor import AlphaSelection
from yellowbrick.regressor import PredictionError, ResidualsPlot
from sklearn.linear_model import LassoCV
from sklearn.linear_model import Lasso
model = AlphaSelection(LassoCV())
model.fit(X_train, y_train)
model.poof()
lasso = Lasso(alpha = 0.078)
y_pred_lasso = lasso.fit(X_train, y_train).predict(X_test)
print("Test set R^2: %.4f"
% lasso.score(X_test, y_test))
print("Mean squared error: %.4f"
% np.mean((y_test - y_pred_lasso) ** 2))
# Coefficients for each feature
pd.DataFrame(lasso.coef_, names[0:6])
# Plot Regressor evaluation for Lasso
visualizer_res = ResidualsPlot(lasso)
visualizer_res.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer_res.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer_res.poof()
# Plot prediction error plot for Lasso
visualizer_pre = PredictionError(lasso)
visualizer_pre.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer_pre.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer_pre.poof()
Explanation: Lasso Regression
Lasso regression is characterized by the tendency of the regularization to push the weights of variables to zero. This is useful for our data, because some of our features are highly correlated while others are not useful in predicting occupancy.
End of explanation
from sklearn.linear_model import ElasticNetCV
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from yellowbrick.regressor import AlphaSelection
from yellowbrick.regressor import PredictionError, ResidualsPlot
# Hyperparameter tuning with yellowbrick
model = AlphaSelection(ElasticNetCV())
model.fit(X_train, y_train)
model.poof()
# Fit Elastic Net Model using standarized data
elastic = ElasticNet(alpha = 0.022)
y_pred_elastic = elastic.fit(X_train, y_train).predict(X_test)
print("Test set R^2: %.4f"
% elastic.score(X_test, y_test))
print("Mean squared error: %.4f"
% np.mean((y_test - y_pred_elastic) ** 2))
# Coefficients for each feature
pd.DataFrame(elastic.coef_, names[0:6])
# Regressor evaluation using Yellowbrick
visualizer_res = ResidualsPlot(elastic)
visualizer_res.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer_res.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer_res.poof()
# Instantiate the visualizer and fit
visualizer = PredictionError(elastic)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
Explanation: "A residuals plot shows the residuals on the vertical axis and the independent variable on the horizontal axis. If the points are randomly dispersed around the horizontal axis, a linear regression model is appropriate for the data; otherwise, a non-linear model is more appropriate." (http://www.scikit-yb.org/en/latest/examples/methods.html#regressor-evaluation)
Based on the residual plot for the Lasso model, a linear model might not be appropriate for our data.
Elastic Net
Elastic net regression is a hybrid approach that blends both penalization of the L2 and L1 norms. Specifically, elastic net regression minimizes the following...
$\lVert y - X\beta \rVert + \lambda\left[(1-\alpha)\lVert \beta \rVert_2^2 + \alpha\lVert \beta \rVert_1\right]$
the α hyper-parameter is between 0 and 1 and controls how much L2 or L1 penalization is used (0 is ridge, 1 is lasso).
The aggressiveness of the penalty for overfitting is controlled by the parameter λ. The usual approach to choosing λ is cross-validation, minimizing the cross-validated mean squared prediction error, but in elastic net regression the optimal λ is heavily dependent on the α hyper-parameter. (http://www.onthelambda.com/2015/08/19/kickin-it-with-elastic-net-regression/)
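A hedged note on how this maps onto scikit-learn's parametrization: ElasticNet's alpha argument plays the role of λ above, while l1_ratio plays the role of α (its default of 0.5 is an equal blend of L1 and L2):
enet = ElasticNet(alpha=0.022, l1_ratio=0.5)
enet.fit(X_train, y_train)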
End of explanation
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV
from yellowbrick.regressor import PredictionError, ResidualsPlot
clf = GradientBoostingRegressor(learning_rate = 0.1, random_state = 0)
parameters = {'max_depth': [7,8,9],'n_estimators':[50,100,150]}
gs = GridSearchCV(clf, parameters, cv=5)
gs.fit(X_train,y_train)
gs.cv_results_
gs.best_params_
gbr = GradientBoostingRegressor(max_depth=9, n_estimators=150)
y_pred_gbr = gbr.fit(X_train, y_train).predict(X_test)
print("Test set R^2: %.4f"
% gbr.score(X_test, y_test))
print("Mean squared error: %.4f"
% np.mean((y_test - y_pred_gbr) ** 2))
# Plot feature importance
params = names[0:6]
feature_importance = gbr.feature_importances_
sorted_features=sorted(zip(feature_importance,params))
importances,params_sorted=zip(*sorted_features)
#plt.ylim([-1,len()])
plt.barh(range(len(params)),importances,align='center',alpha=0.6,color='g')
plt.tick_params(axis='y', which='both', labelleft='off', labelright='on')
plt.yticks(range(len(params)),params_sorted,fontsize=12)
plt.xlabel('Mean Importance',fontsize=12)
plt.title('Mean feature importances\n for gradient boosting regressor')
# Plot Regressor evaluation for GradientBoostingRegressor
visualizer_res = ResidualsPlot(gbr)
visualizer_res.fit(X_train, y_train)
visualizer_res.score(X_test, y_test)
g = visualizer_res.poof()
# Plot prediction error plot for GradientBoostingRegressor
visualizer_pre = PredictionError(gbr)
visualizer_pre.fit(X_train, y_train)
visualizer_pre.score(X_test, y_test)
g = visualizer_pre.poof()
Explanation: Gradient Boosting Regression
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from yellowbrick.regressor import PredictionError, ResidualsPlot
rf = RandomForestRegressor(max_features = 'log2')
y_pred_rf = rf.fit(X_train, y_train).predict(X_test)
print("Test set R^2: %.4f"
% rf.score(X_test, y_test))
print("Mean squared error: %.4f"
% np.mean((y_test - y_pred_rf) ** 2))
# Plot feature importance
params = names[0:6]
feature_importance = rf.feature_importances_
sorted_features=sorted(zip(feature_importance,params))
importances,params_sorted=zip(*sorted_features)
#plt.ylim([-1,len()])
plt.barh(range(len(params)),importances,align='center',alpha=0.6,color='g')
plt.tick_params(axis='y', which='both', labelleft='off', labelright='on')
plt.yticks(range(len(params)),params_sorted,fontsize=12)
plt.xlabel('Mean Importance',fontsize=12)
plt.title('Mean feature importances\n for random forest regressor')
# Plot Regressor evaluation for RandomForestRegressor using Yellowbrick
visualizer_res = ResidualsPlot(rf)
visualizer_res.fit(X_train, y_train)
visualizer_res.score(X_test, y_test)
g = visualizer_res.poof()
# Plot prediction error plot for RandomForestRegressor
visualizer_pre = PredictionError(rf)
visualizer_pre.fit(X_train, y_train)
visualizer_pre.score(X_test, y_test)
g = visualizer_pre.poof()
Explanation: Random Forest Regressor
End of explanation |
15,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X[None,:], self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(error, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
output_error_term = error
aux = hidden_outputs * (1 - hidden_outputs)
hidden_error_term = hidden_error * aux
# Weight step (input to hidden)
delta_weights_i_h += np.dot(X[:,None], hidden_error_term)
# Weight step (hidden to output)
delta_weights_h_o += np.dot(hidden_outputs.T, output_error_term)
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
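Worked out briefly: the slope of $y = x$ is 1 everywhere, so $f'(x) = 1$, and the backpropagated error term at the output layer is just the raw error $(y - \hat{y}) \times 1$; this is why output_error_term is simply error in the train method.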
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.3
hidden_nodes = 8
output_nodes = 1  # a single output node: the network predicts one value (the scaled ridership count)
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
15,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Raw data
Step1: The visualization module (
Step2: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once.
Drawing annotations
You can enter annotation mode by pressing a key. In annotation mode you
can mark segments of data (and modify existing annotations) with the left
mouse button. You can use the description of any existing annotation or
create a new description by typing when the annotation dialog is active.
Notice that the description starting with the keyword 'bad' means that
the segment will be discarded when epoching the data. Existing annotations
can be deleted with the right mouse button. Annotation mode is exited by
pressing a again or closing the annotation window. See also
Step3: We can read events from a file (or extract them from the trigger channel)
and pass them as a parameter when calling the method. The events are plotted
as vertical lines so you can see how they align with the raw data.
We can also pass a corresponding "event_id" to transform the event
trigger integers to strings.
Step4: We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
Step5: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of
Step6: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
Step7: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See
Step8: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data. | Python Code:
import os.path as op
import numpy as np
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'),
preload=True)
raw.set_eeg_reference('average', projection=True) # set EEG average reference
Explanation: Visualize Raw data
End of explanation
raw.plot(block=True, lowpass=40)
Explanation: The visualization module (:mod:mne.viz) contains all the plotting functions
that work in combination with MNE data structures. Usually the easiest way to
use them is to call a method of the data container. All of the plotting
method names start with plot. If you're using Ipython console, you can
just write raw.plot and ask the interpreter for suggestions with a
tab key.
To visually inspect your raw data, you can use the python equivalent of
mne_browse_raw.
End of explanation
raw.plot(butterfly=True, group_by='position')
Explanation: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once.
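For instance, a hedged sketch of the scalings usage described above (not run in this tutorial):
raw.plot(duration=30, scalings='auto')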
Drawing annotations
You can enter annotation mode by pressing the 'a' key. In annotation mode you
can mark segments of data (and modify existing annotations) with the left
mouse button. You can use the description of any existing annotation or
create a new description by typing when the annotation dialog is active.
Notice that the description starting with the keyword 'bad' means that
the segment will be discarded when epoching the data. Existing annotations
can be deleted with the right mouse button. Annotation mode is exited by
pressing 'a' again or closing the annotation window. See also
:class:mne.Annotations and marking_bad_segments. To see all the
interactive features, hit ? key or click help in the lower left
corner of the browser window.
<div class="alert alert-danger"><h4>Warning</h4><p>Annotations are modified in-place immediately at run-time.
Deleted annotations cannot be retrieved after deletion.</p></div>
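Annotations can also be created programmatically rather than drawn by hand; a minimal sketch (onset and duration in seconds; on older MNE versions assignment to raw.annotations is used instead of set_annotations):
annot = mne.Annotations(onset=[10.0], duration=[2.0], description=['bad_manual'])
raw.set_annotations(annot)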
The channels are sorted by channel type by default. You can use the
group_by parameter of :func:raw.plot <mne.io.Raw.plot> to group the
channels in a different way. group_by='selection' uses the same channel
groups as MNE-C's mne_browse_raw (see CACCJEJD). The selections are
defined in mne-python/mne/data/mne_analyze.sel and by modifying the
channels there, you can define your own selection groups. Notice that this
also affects the selections returned by :func:mne.read_selection. By
default the selections only work for Neuromag data, but
group_by='position' tries to mimic this behavior for any data with sensor
positions available. The channels are grouped by sensor positions to 8 evenly
sized regions. Notice that for this to work effectively, all the data
channels in the channel array must be present. The order parameter allows
to customize the order and select a subset of channels for plotting (picks).
Here we use the butterfly mode and group the channels by position. To toggle
between regular and butterfly modes, press 'b' key when the plotter window is
active. Notice that group_by also affects the channel groupings in
butterfly mode.
End of explanation
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
event_id = {'A/L': 1, 'A/R': 2, 'V/L': 3, 'V/R': 4, 'S': 5, 'B': 32}
raw.plot(butterfly=True, events=events, event_id=event_id)
Explanation: We can read events from a file (or extract them from the trigger channel)
and pass them as a parameter when calling the method. The events are plotted
as vertical lines so you can see how they align with the raw data.
We can also pass a corresponding "event_id" to transform the event
trigger integers to strings.
End of explanation
raw.plot_sensors(kind='3d', ch_type='mag', ch_groups='position')
Explanation: We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
End of explanation
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()
Explanation: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of
:func:raw.plot <mne.io.Raw.plot>. You can also pass a list of picks to
color any channel group with different colors.
Now let's add some ssp projectors to the raw data. Here we read them from a
file and plot them.
End of explanation
raw.plot()
Explanation: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
End of explanation
raw.plot_psd(tmax=np.inf, average=False)
Explanation: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See :func:mne.io.Raw.del_proj to actually remove the
projectors.
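If you do want to drop or apply projectors permanently, a sketch would look like the following (the index is hypothetical and depends on your data; the lines are left commented so they do not alter this analysis):
# raw.del_proj(4)   # remove the projector at index 4
# raw.apply_proj()  # permanently apply the remaining active projectors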
Raw container also lets us easily plot the power spectra over the raw data.
Here we plot the data using spatial_colors to map the line colors to
channel locations (the default in versions >= 0.15.0). Another option is to use
the average (the default in versions < 0.15.0). See the API documentation for more info.
End of explanation
layout = mne.channels.read_layout('Vectorview-mag')
layout.plot()
raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
Explanation: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the
channel-wise spectra of the first 30 seconds of the data.
End of explanation |
15,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 3
Step1: Concept for exercise
Step2: Speed improvements
If we were to try and replicate this functionality using the pandas API, we would need to call df.applymap with our unary negation function, and subsequently df.kurtosis on the result of the first call. Let's see how this compares with our new, custom function!
Step3: Congratulations! You have just implemented new DataFrame functionality!
Consider opening a pull request | Python Code:
import modin.pandas as pd
import pandas
import numpy as np
import time
frame_data = np.random.randint(0, 100, size=(2**18, 2**8))
df = pd.DataFrame(frame_data).add_prefix("col")
pandas_df = pandas.DataFrame(frame_data).add_prefix("col")
modin_start = time.time()
print(df.mask(df < 50))
modin_end = time.time()
print("Modin mask took {} seconds.".format(round(modin_end - modin_start, 4)))
pandas_start = time.time()
print(pandas_df.mask(pandas_df < 50))
pandas_end = time.time()
print("pandas mask took {} seconds.".format(round(pandas_end - pandas_start, 4)))
Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2>
Exercise 3: Not Implemented
GOAL: Learn what happens when a function is not yet supported in Modin as well as how to extend Modin's functionality using the DataFrame Algebra.
When functionality has not yet been implemented, we default to pandas
We convert a Modin dataframe to pandas to do the operation, then convert it back once it is finished. These operations will have a high overhead due to the communication involved and will take longer than pandas.
When this is happening, a warning will be given to the user to inform them that this operation will take longer than usual. For example, DataFrame.mask is not yet implemented. In this case, when a user tries to use it, they will see this warning:
UserWarning: `DataFrame.mask` defaulting to pandas implementation.
Concept for exercise: Default to pandas
In this section of the exercise we will see first-hand how the runtime is affected by operations that are not implemented.
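As an aside, if the fallback warnings become noisy, one way to silence them (a sketch using only the standard library; it only hides the message, the operation still falls back to pandas) is:
import warnings
warnings.filterwarnings("ignore", message=".*defaulting to pandas.*")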
End of explanation
from modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler
from modin.core.dataframe.algebra import TreeReduce
PandasQueryCompiler.neg_kurtosis_custom = TreeReduce.register(lambda cell_value, **kwargs: ~cell_value,
pandas.DataFrame.kurtosis)
from pandas._libs import lib
# The function signature came from the pandas documentation:
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.kurtosis.html
def neg_kurtosis_func(self, axis=lib.no_default, skipna=True, level=None, numeric_only=None, **kwargs):
# We need to specify the axis for the query compiler
if axis in [None, lib.no_default]:
axis = 0
# The constructor allows you to pass in a query compiler as a keyword argument
# Reduce dimension is used for reduces
# We also pass all keyword arguments here to ensure correctness
return self._reduce_dimension(
self._query_compiler.neg_kurtosis_custom(
axis=axis, skipna=skipna, level=level, numeric_only=numeric_only, **kwargs
)
)
pd.DataFrame.neg_kurtosis_custom = neg_kurtosis_func
Explanation: Concept for exercise: Register custom functions
Modin's user-facing API is pandas, but it is possible that we do not yet support your favorite or most-needed functionalities. Your user-defined function may also be able to be executed more efficiently if you pre-define the type of function it is (e.g. map, reduce, etc.) using the DataFrame Algebra. To solve either case, it is possible to register a custom function to be applied to your data.
Registering a custom function for all query compilers
To register a custom function for a query compiler, we first need to import it:
python
from modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler
The PandasQueryCompiler is responsible for defining and compiling the queries that can be operated on by Modin, and is specific to the pandas storage format. Any queries defined here must also both be compatible with and result in a pandas.DataFrame. Many functionalities are very simply implemented, as you can see in the current code: Link.
If we want to register a new function, we need to understand what kind of function it is. In our example, we will try to implement a kurtosis on the unary negation of the values in the dataframe, which is a map (unary negation of each cell) followed by a reduce. So we next want to import the function type so we can use it in our definition:
python
from modin.core.dataframe.algebra import TreeReduce
Then we can just use the TreeReduce.register classmethod and assign it to the PandasQueryCompiler:
python
PandasQueryCompiler.neg_kurtosis = TreeReduce.register(lambda cell_value, **kwargs: ~cell_value, pandas.DataFrame.kurtosis)
We include **kwargs to the lambda function since the query compiler will pass all keyword arguments to both the map and reduce functions.
Finally, we want a handle to it from the DataFrame, so we need to create a way to do that:
```python
def neg_kurtosis_func(self, **kwargs):
    # The constructor allows you to pass in a query compiler as a keyword argument
    return self.__constructor__(query_compiler=self._query_compiler.neg_kurtosis(**kwargs))
pd.DataFrame.neg_kurtosis_custom = neg_kurtosis_func
```
And then you can use it like you usually would:
python
df.neg_kurtosis_custom()
End of explanation
start = time.time()
print(pandas_df.applymap(lambda cell_value: ~cell_value).kurtosis())
end = time.time()
pandas_duration = end - start
print("pandas unary negation kurtosis took {} seconds.".format(pandas_duration))
start = time.time()
print(df.applymap(lambda x: ~x).kurtosis())
end = time.time()
modin_duration = end - start
print("Modin unary negation kurtosis took {} seconds.".format(modin_duration))
custom_start = time.time()
print(df.neg_kurtosis_custom())
custom_end = time.time()
modin_custom_duration = custom_end - custom_start
print("Modin neg_kurtosis_custom took {} seconds.".format(modin_custom_duration))
from IPython.display import Markdown, display
display(Markdown("### As expected, Modin is {}x faster than pandas when chaining the functions; however we see that our custom function is even faster than that - beating pandas by {}x, and Modin (when chaining the functions) by {}x!".format(round(pandas_duration / modin_duration, 2), round(pandas_duration / modin_custom_duration, 2), round(modin_duration / modin_custom_duration, 2))))
Explanation: Speed improvements
If we were to try and replicate this functionality using the pandas API, we would need to call df.applymap with our unary negation function, and subsequently df.kurtosis on the result of the first call. Let's see how this compares with our new, custom function!
End of explanation
modin_mad_custom_start = time.time()
# Implement your function here! Put the result of your custom squared `mad` in the variable `modin_mad_custom`
# Hint: Look at the kurtosis walkthrough above
modin_mad_custom = ...
print(modin_mad_custom)
modin_mad_custom_end = time.time()
# Evaluation code, do not change!
modin_mad_start = time.time()
modin_mad = df.applymap(lambda x: x**2).mad()
print(modin_mad)
modin_mad_end = time.time()
assert modin_mad_end - modin_mad_start > modin_mad_custom_end - modin_mad_custom_start, \
"Your implementation was too slow, or you used the chaining functions approach. Try again"
assert modin_mad._to_pandas().equals(modin_mad_custom._to_pandas()), "Your result did not match the result of chaining the functions, try again"
Explanation: Congratulations! You have just implemented new DataFrame functionality!
Consider opening a pull request: https://github.com/modin-project/modin/pulls
For a complete list of what is implemented, see the Supported APIs section.
Test your knowledge: Add a custom function for another tree reduce: finding DataFrame.mad after squaring all of the values
See the pandas documentation for the correct signature: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mad.html
End of explanation |
15,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Symbolic Computation in Python
Step1: Introduction
This notebook is a French translation of the SymPy course available, among other places, on Wakari, with a few modifications and additions, notably for solving differential equations. Its goal is to let students of different levels experiment with mathematical notions by giving them a code base they can modify.
SymPy is a Python module that can be used in a Python program or in an IPython session. It provides powerful symbolic computation capabilities.
To start using SymPy in a Python program or notebook, import the sympy module
Step2: To get nicely formatted $\LaTeX$ mathematical output
Step3: Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. For this we use the Symbol class
Step4: We can add constraints on the symbols when they are created
Step5: Complex numbers
The imaginary unit is written I in SymPy.
Step6: Rational numbers
There are three different numeric types in SymPy
Step7: Numerical evaluation
SymPy allows arbitrary-precision numerical evaluation and provides expressions for a few constants such as
Step8: When evaluating algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy this is done with the subs function
Step9: The subs function can also substitute symbols and expressions
Step10: We can also combine the evaluation of expressions with NumPy arrays (to plot a function, for example)
Step11: Algebraic manipulations
One of the main uses of a symbolic computation system is to perform algebraic manipulations of expressions. It is possible to expand a product or to factor an expression. The functions for these basic operations are shown in the examples of the following sections.
Expand and factor
The first steps in algebraic manipulation
Step12: The expand function takes keyword arguments indicating the type of expansion to perform. For example, to expand a trigonometric expression we use the argument trig=True
Step13: Run help(expand) for a detailed explanation of the different types of expansion available.
The opposite operation to expansion is of course factorization, which is done with the factor function
Step14: Simplify
The simplify function tries to simplify an expression into a nicer-looking one, using various techniques. More specific alternatives to the simplify function also exist
Step15: simplify can also be used to test the equality of expressions
Step16: apart and together
To work with symbolic fractions we have the apart and together functions
Step17: Simplify combines the fractions but does not factor
Step18: Calculus
Besides algebraic manipulations, the other main use of a symbolic computation system is to perform calculus operations such as derivatives and integrals of algebraic expressions.
Differentiation
Differentiation is usually straightforward. We use the diff function, with the expression to differentiate as the first argument and the symbol of the variable to differentiate with respect to as the second
Step19: First derivative
Step20: For higher-order derivatives
Step21: To compute the derivative of a multivariate expression
Step22: $\frac{d^3f}{dxdy^2}$
Step23: Integration
Integration is done in a similar way
Step24: By providing limits for the integration variable we can evaluate definite integrals
Step25: and also improper integrals for which no antiderivative is known
Step26: Reminder, oo is the SymPy notation for infinity.
Sums and products
We can evaluate sums and products of expressions with the Sum and Product functions
Step27: Products are computed in a very similar way
Step28: Limits
Limits are evaluated with the limit function. For example
Step29: We can change the direction from which the limit point is approached with the dir keyword argument
Step30: Series
Series expansion is another very useful feature of a symbolic computation system. In SymPy, series expansions are computed with the series function
Step31: By default the expression is expanded around $x=0$, but we can expand the series around any other value of $x$ by explicitly including that value in the function call
Step32: And we can explicitly define up to which order the expansion should be carried out
Step33: The series expansion includes the order of approximation. This makes it possible to keep track of the order of the result of computations that use series expansions of different orders
Step34: If we do not want to display the order term we use the removeO method
Step35: But this leads to incorrect results for computations with several expansions
Step36: More on series
https
Step37: With instances of the Matrix class we can do the usual algebraic operations
Step38: And compute determinants and inverses
Step39: Solving equations
To solve equations and systems of equations we use the solve function
Step40: System of equations
Step41: In terms of other symbolic expressions
Step42: Solving differential equations
To solve differential equations and systems of differential equations we use the dsolve function
Step43: Example of a second-order differential equation
Step44: SymPy cannot solve this nonlinear differential equation with the $h(x)^2$ term
Step45: We can solve this differential equation with a numerical method provided by SciPy's odeint function | Python Code:
%matplotlib inline
Explanation: Symbolic Computation in Python
End of explanation
from sympy import *
Explanation: Introduction
This notebook is a French translation of the SymPy course available, among other places, on Wakari, with a few modifications and additions, notably for solving differential equations. Its goal is to let students of different levels experiment with mathematical notions by giving them a code base they can modify.
SymPy is a Python module that can be used in a Python program or in an IPython session. It provides powerful symbolic computation capabilities.
To start using SymPy in a Python program or notebook, import the sympy module:
End of explanation
from sympy import init_printing
init_printing(use_latex=True)
Explanation: To get nicely formatted $\LaTeX$ mathematical output:
End of explanation
x = Symbol('x')
(pi + x)**2
# alternative way to define several symbols in a single statement
a, b, c = symbols("a, b, c")
Explanation: Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. For this we use the Symbol class:
End of explanation
x = Symbol('x', real=True)
x.is_imaginary
x = Symbol('x', positive=True)
x > 0
Explanation: We can add constraints on the symbols when they are created:
End of explanation
1+1*I
I**2
(1 + x * I)**2
Explanation: Complex numbers
The imaginary unit is written I in SymPy.
End of explanation
r1 = Rational(4,5)
r2 = Rational(5,4)
r1
r1+r2
r1/r2
Explanation: Rational numbers
There are three different numeric types in SymPy: Real, Rational and Integer:
End of explanation
pi.evalf(n=50)
E.evalf(n=4)
y = (x + pi)**2
N(y, 5) # shortcut for evalf
Explanation: Numerical evaluation
SymPy allows arbitrary-precision numerical evaluation and provides expressions for a few constants such as pi, E, and oo for infinity.
To numerically evaluate an expression we use the evalf function (or N). It takes an argument n that specifies the number of significant digits.
End of explanation
y.subs(x, 1.5)
N(y.subs(x, 1.5))
Explanation: When evaluating algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy this is done with the subs function:
End of explanation
y.subs(x, a+pi)
Explanation: The subs function can also substitute symbols and expressions:
End of explanation
import numpy
x_vec = numpy.arange(0, 10, 0.1)
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
Explanation: We can also combine the evaluation of expressions with NumPy arrays (to plot a function, for example):
End of explanation
(x+1)*(x+2)*(x+3)
expand((x+1)*(x+2)*(x+3))
Explanation: Algebraic manipulations
One of the main uses of a symbolic computation system is to perform algebraic manipulations of expressions. It is possible to expand a product or to factor an expression. The functions for these basic operations are shown in the examples of the following sections.
Expand and factor
The first steps in algebraic manipulation:
End of explanation
sin(a+b)
expand(sin(a+b), trig=True)
sin(a+b)**3
expand(sin(a+b)**3, trig=True)
Explanation: The expand function takes keyword arguments indicating the type of expansion to perform. For example, to expand a trigonometric expression we use the argument trig=True:
End of explanation
factor(x**3 + 6 * x**2 + 11*x + 6)
x1, x2 = symbols("x1, x2")
factor(x1**2*x2 + 3*x1*x2 + x1*x2**2)
Explanation: Run help(expand) for a detailed explanation of the different types of expansion available.
The opposite operation to expansion is of course factorization, which is done with the factor function:
End of explanation
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))
# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)
simplify(cos(x)/sin(x))
Explanation: Simplify
The simplify function tries to simplify an expression into a nicer-looking one, using various techniques. More specific alternatives to the simplify function also exist: trigsimp, powsimp, logcombine, etc.
The basic usages of these functions are as follows:
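A minimal sketch of the more targeted helpers mentioned above (reusing the symbols already defined in this notebook):
trigsimp(sin(x)**2 + cos(x)**2)          # -> 1
powsimp(x**a * x**b)                     # -> x**(a + b)
logcombine(log(a) + log(b), force=True)  # -> log(a*b)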
End of explanation
exp1 = sin(a+b)**3
exp2 = sin(a)**3*cos(b)**3 + 3*sin(a)**2*sin(b)*cos(a)*cos(b)**2 + 3*sin(a)*sin(b)**2*cos(a)**2*cos(b) + sin(b)**3*cos(a)**3
simplify(exp1 - exp2)
if simplify(exp1 - exp2) == 0:
print "{0} = {1}".format(exp1, exp2)
else:
print "exp1 et exp2 sont différentes"
Explanation: simplify can also be used to test the equality of expressions:
End of explanation
f1 = 1/((a+1)*(a+2))
f1
apart(f1)
f2 = 1/(a+2) + 1/(a+3)
f2
together(f2)
Explanation: apart and together
To work with symbolic fractions we have the apart and together functions:
End of explanation
simplify(f2)
Explanation: Simplify combines the fractions but does not factor:
End of explanation
y
Explanation: Calculus
Besides algebraic manipulations, the other main use of a symbolic computation system is to perform calculus operations such as derivatives and integrals of algebraic expressions.
Differentiation
Differentiation is usually straightforward. We use the diff function, with the expression to differentiate as the first argument and the symbol of the variable to differentiate with respect to as the second:
End of explanation
diff(y**2, x)
Explanation: First derivative
End of explanation
diff(y**2, x, x) # second derivative
diff(y**2, x, 2) # second derivative with an alternative syntax
Explanation: For higher-order derivatives:
End of explanation
x, y, z = symbols("x,y,z")
f = sin(x*y) + cos(y*z)
Explanation: To compute the derivative of a multivariate expression:
End of explanation
diff(f, x, 1, y, 2)
Explanation: $\frac{d^3f}{dxdy^2}$
End of explanation
f
integrate(f, x)
Explanation: Integration
Integration is done in a similar way:
End of explanation
integrate(f, (x, -1, 1))
Explanation: By providing limits for the integration variable we can evaluate definite integrals:
End of explanation
x_i = numpy.arange(-5, 5, 0.1)
y_i = numpy.array([N((exp(-x**2)).subs(x, xx)) for xx in x_i])
fig2, ax2 = plt.subplots()
ax2.plot(x_i, y_i)
ax2.set_title("$e^{-x^2}$")
integrate(exp(-x**2), (x, -oo, oo))
Explanation: and also improper integrals for which no antiderivative is known
End of explanation
n = Symbol("n")
Sum(1/n**2, (n, 1, 10))
Sum(1/n**2, (n,1, 10)).evalf()
Sum(1/n**2, (n, 1, oo)).evalf()
N(pi**2/6) # Riemann zeta(2) function
Explanation: Reminder, oo is the SymPy notation for infinity.
Sums and products
We can evaluate sums and products of expressions with the Sum and Product functions:
End of explanation
Product(n, (n, 1, 10)) # 10!
Explanation: Products are computed in a very similar way:
End of explanation
limit(sin(x)/x, x, 0)
Explanation: Limits
Limits are evaluated with the limit function. For example:
End of explanation
limit(1/x, x, 0, dir="+")
limit(1/x, x, 0, dir="-")
Explanation: We can change the direction from which the limit point is approached with the dir keyword argument:
End of explanation
series(exp(x), x)
Explanation: Series
Series expansion is another very useful feature of a symbolic computation system. In SymPy, series expansions are computed with the series function:
End of explanation
series(exp(x), x, 1)
Explanation: By default the expression is expanded around $x=0$, but we can expand the series around any other value of $x$ by explicitly including that value in the function call:
End of explanation
series(exp(x), x, 1, 10)
Explanation: And we can explicitly define up to which order the expansion should be carried out:
End of explanation
s1 = cos(x).series(x, 0, 5)
s1
s2 = sin(x).series(x, 0, 2)
s2
expand(s1 * s2)
Explanation: The series expansion includes the order of approximation. This makes it possible to keep track of the order of the result of computations that use series expansions of different orders:
End of explanation
expand(s1.removeO() * s2.removeO())
Explanation: If we do not want to display the order term we use the removeO method:
End of explanation
(cos(x)*sin(x)).series(x, 0, 6)
Explanation: But this leads to incorrect results for computations with several expansions:
End of explanation
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12],[m21, m22]])
A
b = Matrix([[b1], [b2]])
b
Explanation: More on series
https://fr.wikipedia.org/wiki/D%C3%A9veloppement_limit%C3%A9 - Wikipedia article (in French).
Linear algebra
Matrices
Matrices are defined with the Matrix class:
End of explanation
A**2
A * b
Explanation: With instances of the Matrix class we can do the usual algebraic operations:
End of explanation
A.det()
A.inv()
Explanation: And compute determinants and inverses:
End of explanation
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
expand((x-1)*(x-2)*(x-3)*(x-4)*(x-5))
solve(x**5 - 15*x**4 + 85*x**3 - 225*x**2 + 274*x - 120, x)
Explanation: Solving equations
To solve equations and systems of equations we use the solve function:
End of explanation
solve([x + y - 1, x - y - 1], [x,y])
Explanation: System of equations:
End of explanation
solve([x + y - a, x - y - c], [x,y])
Explanation: In terms of other symbolic expressions:
End of explanation
from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols
from sympy.abc import x
Explanation: Solving differential equations
To solve differential equations and systems of differential equations we use the dsolve function:
End of explanation
f = Function('f')
dsolve(Derivative(f(x), x, x) + 9*f(x), f(x))
dsolve(diff(f(x), x, 2) + 9*f(x), f(x), hint='default', ics={f(0):0, f(1):10})
# Attempt to retrieve the value of the constant C1 when an initial condition is provided
eqg = Symbol("eqg")
g = Function('g')
eqg = dsolve(Derivative(g(x), x) + g(x), g(x), ics={g(2): 50})
eqg
print "g(x) est de la forme {}".format(eqg.rhs)
# manual search for the value of c1 that satisfies the initial condition
c1 = Symbol("c1")
c1 = solve(Eq(c1*E**(-2),50), c1)
print c1
Explanation: Example of a second-order differential equation
End of explanation
h = Function('h')
try:
dsolve(Derivative(h(x), x) + 0.001*h(x)**2 - 10, h(x))
except:
print "une erreur s'est produite"
Explanation: SymPy cannot solve this nonlinear differential equation with the $h(x)^2$ term:
End of explanation
from scipy.integrate import odeint
def dv_dt(vec, t, k, m, g):
z, v = vec[0], vec[1]
dz = -v
dv = -k/m*v**2 + g
return [dz, dv]
vec0 = [0, 0]  # initial conditions [altitude, velocity]
t_si = numpy.linspace(0, 30, 150)  # from 0 to 30 s, 150 points
k = 0.1  # aerodynamic drag coefficient
m = 80  # mass (kg)
g = 9.81  # gravitational acceleration (m/s/s)
v_si = odeint(dv_dt, vec0, t_si, args=(k, m, g))
print "vitesse finale : {0:.1f} m/s soit {1:.0f} km/h".format(v_si[-1, 1], v_si[-1, 1] * 3.6)
fig_si, ax_si = plt.subplots()
ax_si.set_title("Vitesse en chute libre")
ax_si.set_xlabel("s")
ax_si.set_ylabel("m/s")
ax_si.plot(t_si, v_si[:,1], 'b')
Explanation: We can solve this differential equation with a numerical method provided by SciPy's odeint function:
Numerical method for differential equations (not SymPy)
End of explanation |
15,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommendation Engine
In this tutorial we are going to build a simple recommender system using collaborative filtering. You'll be learning about the popular data analysis package pandas along the way.
1. The import statements
Step1: 2. The data
We will use Germany's data of the Last.fm Dataset. To read and explore the data we will use the pandas library
Step2: The resulting DataFrame contains a row for each user and each column represents an artist. The values indicate whether the user listend to a song by that artist (1) or not (0). Note that the number of times a person listened to a specific artist is not listed.
3. Determining artist similarity
We want to figure out which artist to recommend to which user. Since we know which user listened to which artists we can look for artists or users that are similar. Humans can have vastly complex listening preferences and are very hard to group. Artists on the other hand are usually much easier to group. So it is best to look for similarities between artists rather than between users.
To determine if two artists are similar, you can use many different similarity metrics. Finding the best metric is a whole research topic on its own. In many cases though, the cosine similarity is used. The implementation we will use here is the sklearn.metrics.pairwise.cosine_similarity.
This function will create a matrix of similarity scores between elements in the first dimension of the input. In our dataset the first dimension holds the different users and the second the different artists. You can switch these dimensions with np.transpose().
Step3: The cosine_similarity function returned a 2-dimensional numpy array. This array contains all the similarity values we need, but it is not labelled. Since the entire array will not fit the screen, we will use slicing to print a subset of the result.
Step4: The artist names are both the row and column labels for the similarity_matrix. We can add these labels by creating a new DataFrame based on the numpy array. By using the pandas.DataFrame.iloc integer-location based indexer, we get the same slice as above, but with added labels.
Step5: Pandas also provides a label based indexer, pandas.DataFrame.loc, which we can use to get a slice based on label values.
Step6: As you can see above, bands are 100% similar to themselves and The White Stripes are nothing like Abba.
We can further increase the usability of this data by making it a tidy dataset. This means we'll put each variable in a column, and each observation in a row. There are three variables in our dataset
Step7: To view the first n rows, we can use the pandas.DataFrame.head method, the default value for n is 5.
Step8: Note that we created a MultiIndex by specifying two columns in the set_index call.
Step9: The use of the MultiIndex enables flexible access to the data. If we index with a single artist name, we get all compared artists. To view the last n rows for this result, we can use the pandas.DataFrame.tail method.
Step10: We can index on multiple levels by providing a tuple of indexes
Step11: 4. Picking the best matches
Even though many of the artists above have a similarity close to 0, there might be some artists that seem to be slightly similar because somebody with a complex taste listened to them both. To remove this noise from the dataset we are going to limit the number of matches.
Let's first try this with the first artist in the list
Step13: We can transform the task of getting the most similar bands for a given band to a function.
Step14: Note that we also defined a docstring for this function, which we can view by using help() or shift + tab in a jupyter notebook.
Step15: 5. Get the listening history
To determine the recommendation score for an artist, we'll want to know whether a user listened to many similar artists. We know which artists are similar to a given artist, but we still need to figure out if any of these similar artists are in the listening history of the user. The listening history of a single user can be acquired by entering the user id with the .loc indexer.
Step16: We now have the complete listening history, but we only need the history for the similar artists. For this we can use the index labels from the DataFrame returned by the most_similar_artists function. Index labels for a DataFrame can be retrieved by using the pandas.DataFrame.index attribute.
Step17: We can combine the user id and similar labels in the .loc indexer to get the listening history for the most similar artists.
Step19: Let's make a function to get the most similar artists and their listening history for a given artist and user. The function creates two DataFrames with the same index, and then uses pandas.concat to create a single DataFrame from them.
Step20: 6. Calculate the recommendation score.
Now that we have the most_similar_artists_history function, we can start to figure out which artists to advise to whom. We want to quantify how the listening history of a user matches artists similar to an artist they didn't listen to yet. For this purpose we will use the following recommendation score
Step21: Remember what the DataFrame returned by the most_similar_artists_history function looks like
Step22: Pandas provides methods to do column or row aggregation, like e.g. pandas.DataFrame.product. This method will calculate all values in a column or row. The direction can be chosen with the axis parameter. As we need the product of the values in the rows (similarity * history), we will need to specify axis=1.
Step23: Then there's pandas.DataFrame.sum which does the same thing for summing the values. As we want the sum for all values in the column we would have to specify axis=0. Since 0 is the default value for the axis parameter we don't have to add it to the method call.
Step25: Knowing these methods, it is only a small step to define the scoring function based on the output of most_similar_artists_history.
Step27: Determine artists to recommend
We only want to recommend artists the user didn't listen to yet, which we'll determine by using the listening history.
Step29: The last requirement for our recommender engine is a function that can score all unknown artists for a given user. We will make this function return a list of dictionaries, which can be easily converted to a DataFrame later on. The list will be generated using a list comprehension.
Step31: From the scored artists we can easily derive the best recommendations for a given user.
Step32: With this final function, it is a small step to get recommendations for multiple users. As our code hasn't been optimized for performance, it is advised to limit the number of users somewhat.
Step33: We can now use the concat function again to get a nice overview of the recommended artists. | Python Code:
import numpy as np
import pandas as pd
import sklearn.metrics.pairwise
Explanation: Recommendation Engine
In this tutorial we are going to build a simple recommender system using collaborative filtering. You'll be learning about the popular data analysis package pandas along the way.
1. The import statements
End of explanation
data = pd.read_csv('data/lastfm-matrix-germany.csv').set_index('user')
data.head()
data.shape
Explanation: 2. The data
We will use Germany's data of the Last.fm Dataset. To read and explore the data we will use the pandas library:
+ pandas.read_csv: reads a csv file and returns a pandas.DataFrame, a two-dimensional data structure with labelled rows and columns.
+ pandas.DataFrame.set_index: sets the DataFrame index (the row labels).
Pandas enables the use of method chaining: the read_csv call returns a DataFrame, on which we can immediately apply the set_index method by chaining it via dot notation.
End of explanation
### BEGIN SOLUTION
similarity_matrix = sklearn.metrics.pairwise.cosine_similarity(np.transpose(data))
### END SOLUTION
# similarity_matrix = sklearn.metrics.pairwise.cosine_similarity( ? )
assert similarity_matrix.shape == (285, 285)
print(similarity_matrix.ndim)
Explanation: The resulting DataFrame contains a row for each user and each column represents an artist. The values indicate whether the user listend to a song by that artist (1) or not (0). Note that the number of times a person listened to a specific artist is not listed.
3. Determining artist similarity
We want to figure out which artist to recommend to which user. Since we know which user listened to which artists we can look for artists or users that are similar. Humans can have vastly complex listening preferences and are very hard to group. Artists on the other hand are usually much easier to group. So it is best to look for similarities between artists rather than between users.
To determine if two artists are similar, you can use many different similarity metrics. Finding the best metric is a whole research topic on its own. In many cases though, the cosine similarity is used. The implementation we will use here is the sklearn.metrics.pairwise.cosine_similarity.
This function will create a matrix of similarity scores between elements in the first dimension of the input. In our dataset the first dimension holds the different users and the second the different artists. You can switch these dimensions with np.transpose().
End of explanation
similarity_matrix[:5, :5]
Explanation: The cosine_similarity function returned a 2-dimensional numpy array. This array contains all the similarity values we need, but it is not labelled. Since the entire array will not fit the screen, we will use slicing to print a subset of the result.
End of explanation
### BEGIN SOLUTION
artist_similarities = pd.DataFrame(similarity_matrix, index=data.columns, columns=data.columns)
### END SOLUTION
# artist_similarities = pd.DataFrame( ? , index=data.columns, columns= ? )
assert np.array_equal(artist_similarities.columns, data.columns)
assert artist_similarities.shape == similarity_matrix.shape
artist_similarities.iloc[:5, :5]
Explanation: The artist names are both the row and column labels for the similarity_matrix. We can add these labels by creating a new DataFrame based on the numpy array. By using the pandas.DataFrame.iloc integer-location based indexer, we get the same slice as above, but with added labels.
End of explanation
slice_artists = ['ac/dc', 'madonna', 'metallica', 'rihanna', 'the white stripes']
artist_similarities.loc[slice_artists, slice_artists]
Explanation: Pandas also provides a label based indexer, pandas.DataFrame.loc, which we can use to get a slice based on label values.
End of explanation
similarities = (
# start from untidy DataFrame
artist_similarities
# add a name to the index
.rename_axis(index='artist')
# artist needs to be a column for melt
.reset_index()
# create the tidy dataset
.melt(id_vars='artist', var_name='compared_with', value_name='cosine_similarity')
# artist compared with itself not needed, keep rows where artist and compared_with are not equal.
.query('artist != compared_with')
# set identifying observations to index
.set_index(['artist', 'compared_with'])
# sort the index
.sort_index()
)
Explanation: As you can see above, bands are 100% similar to themselves and The White Stripes are nothing like Abba.
We can further increase the usability of this data by making it a tidy dataset. This means we'll put each variable in a column, and each observation in a row. There are three variables in our dataset:
+ first artist
+ second artist
+ cosine similarity
In our current DataFrame the second artist is determined by the column labels, and as a consequence the cosine similarity observations are spread over multiple columns. The pandas.DataFrame.melt method will fix this. We make extensive use of method chaining for this reshaping of the DataFrame. If you want to know the effect of the different methods, you can comment / uncomment them and check the influence on the result. A tiny illustration of melt is shown below.
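The following toy example (made-up data, not part of the Last.fm dataset) shows what melt does on its own:
toy = pd.DataFrame({'artist': ['a', 'b'], 'x': [1.0, 0.5], 'y': [0.5, 1.0]})
toy.melt(id_vars='artist', var_name='compared_with', value_name='cosine_similarity')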
End of explanation
similarities.head()
Explanation: To view the first n rows, we can use the pandas.DataFrame.head method, the default value for n is 5.
End of explanation
similarities.index
Explanation: Note that we created a MultiIndex by specifying two columns in the set_index call.
End of explanation
similarities.loc['the beatles', :].tail()
Explanation: The use of the MultiIndex enables flexible access to the data. If we index with a single artist name, we get all compared artists. To view the last n rows for this result, we can use the pandas.DataFrame.tail method.
End of explanation
similarities.loc[('abba', 'madonna'), :]
print(slice_artists)
similarities.loc[('abba', slice_artists), :]
Explanation: We can index on multiple levels by providing a tuple of indexes:
End of explanation
artist = 'a perfect circle'
n_artists = 10
### BEGIN SOLUTION
top_n = similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists)
### END SOLUTION
# top_n = similarities.loc[?, :].sort_values('cosine_similarity') ?
print(top_n)
assert len(top_n) == 10
assert type(top_n) == pd.DataFrame
Explanation: 4. Picking the best matches
Even though many of the artists above have a similarity close to 0, there might be some artists that seem to be slightly similar because somebody with a complex taste listened to them both. To remove this noise from the dataset we are going to limit the number of matches.
Let's first try this with the first artist in the list: a perfect circle.
End of explanation
def most_similar_artists(artist, n_artists=10):
Get the most similar artists for a given artist.
Parameters
----------
artist: str
The artist for which to get similar artists
n_artists: int, optional
The number of similar artists to return
Returns
-------
pandas.DataFrame
A DataFrame with the similar artists and their cosine_similarity to
the given artist
### BEGIN SOLUTION
return similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists)
### END SOLUTION
# return similarities.loc[ ? ].sort_values( ? ) ?
print(most_similar_artists('a perfect circle'))
assert top_n.equals(most_similar_artists('a perfect circle'))
assert most_similar_artists('abba', n_artists=15).shape == (15, 1)
Explanation: We can transform the task of getting the most similar bands for a given band to a function.
End of explanation
help(most_similar_artists)
Explanation: Note that we also defined a docstring for this function, which we can view by using help() or shift + tab in a jupyter notebook.
End of explanation
user_id = 42
### BEGIN SOLUTION
user_history = data.loc[user_id, :]
### END SOLUTION
# user_history = data.loc[ ? , ?]
print(user_history)
assert user_history.name == user_id
assert len(user_history) == 285
Explanation: 5. Get the listening history
To determine the recommendation score for an artist, we'll want to know whether a user listened to many similar artists. We know which artists are similar to a given artist, but we still need to figure out if any of these similar artists are in the listening history of the user. The listening history of a single user can be acquired by entering the user id with the .loc indexer.
End of explanation
artist = 'the beatles'
### BEGIN SOLUTION
similar_labels = most_similar_artists(artist).index
### END SOLUTION
# similar_labels = most_similar_artists( ? ). ?
print(similar_labels)
assert len(similar_labels) == 10
assert type(similar_labels) == pd.Index
Explanation: We now have the complete listening history, but we only need the history for the similar artists. For this we can use the index labels from the DataFrame returned by the most_similar_artists function. Index labels for a DataFrame can be retrieved by using the pandas.DataFrame.index attribute.
End of explanation
user_id = 42
### BEGIN SOLUTION
similar_history = data.loc[user_id, similar_labels]
### END SOLUTION
# similar_history = data.loc[?, ?]
assert similar_history.name == user_id
print(similar_history)
Explanation: We can combine the user id and similar labels in the .loc indexer to get the listening history for the most similar artists.
End of explanation
def most_similar_artists_history(artist, user_id):
Get most similar artists and their listening history.
Parameters
----------
artist: str
The artist for which to get the most similar bands
user_id: int
The user for which to get the listening history
Returns
-------
pandas.DataFrame
A DataFrame containing the most similar artists for the given artist,
with their cosine similarities and their listening history status for
the given user.
### BEGIN SOLUTION
artists = most_similar_artists(artist)
history = data.loc[user_id, artists.index].rename('listening_history')
### END SOLUTION
# artists = most_similar_artists( ? )
# history = data.loc[ ? , ? ].rename('listening_history')
return pd.concat([artists, history], axis=1)
example = most_similar_artists_history('abba', 42)
assert example.columns.to_list() == ['cosine_similarity', 'listening_history']
example
Explanation: Let's make a function to get the most similar artists and their listening history for a given artist and user. The function creates two DataFrames with the same index, and then uses pandas.concat to create a single DataFrame from them.
End of explanation
listening_history = np.array([0, 1, 0])
similarity_scores = np.array([0.3, 0.2, 0.1])
recommendation_score = sum(listening_history * similarity_scores) / sum(similarity_scores)
print(f'{recommendation_score:.3f}')
Explanation: 6. Calculate the recommendation score.
Now that we have the most_similar_artists_history function, we can start to figure out which artists to advise to whom. We want to quantify how the listening history of a user matches artists similar to an artist they didn't listen to yet. For this purpose we will use the following recommendation score:
+ We start with the similar artists for a given artist, and their listening history for the user.
+ Then we sum the cosine similarities of artists the user listened to.
+ In the end we divide by the total sum of similarities to normalize the score.
So when a user listened to 1 of 3 artists that are similar, for example [0, 1, 0] and their respective similarity scores are [0.3, 0.2, 0.1] you get the following recommendation score:
End of explanation
user_id = 42
artist = 'abba'
most_similar_artists_history(artist, user_id)
Explanation: Remember what the DataFrame returned by the most_similar_artists_history function looks like:
End of explanation
most_similar_artists_history(artist, user_id).product(axis=1)
Explanation: Pandas provides methods to do column or row aggregation, like e.g. pandas.DataFrame.product. This method will calculate all values in a column or row. The direction can be chosen with the axis parameter. As we need the product of the values in the rows (similarity * history), we will need to specify axis=1.
End of explanation
most_similar_artists_history(artist, user_id).product(axis=1).sum()
Explanation: Then there's pandas.DataFrame.sum which does the same thing for summing the values. As we want the sum for all values in the column we would have to specify axis=0. Since 0 is the default value for the axis parameter we don't have to add it to the method call.
End of explanation
def recommendation_score(artist, user_id):
Calculate recommendation score.
Parameters
----------
artist: str
The artist for which to calculate the recommendation score.
user_id: int
The user for which to calculate the recommendation score.
Returns:
float
Recommendation score
df = most_similar_artists_history(artist, user_id)
### BEGIN SOLUTION
return df.product(axis=1).sum() / df.loc[:, 'cosine_similarity'].sum()
### END SOLUTION
# return df.?(axis=1).?() / df.loc[:, ? ].sum()
assert np.allclose(recommendation_score('abba', 42), 0.08976655361839528)
assert np.allclose(recommendation_score('the white stripes', 1), 0.09492796371597861)
recommendation_score('abba', 42)
Explanation: Knowing these methods, it is only a small step to define the scoring function based on the output of most_similar_artists_history.
End of explanation
def unknown_artists(user_id):
Get artists the user hasn't listened to.
Parameters
----------
user_id: int
User for which to get unknown artists
Returns
-------
pandas.Index
Collection of artists the user hasn't listened to.
### BEGIN SOLUTION
history = data.loc[user_id, :]
return history.loc[history == 0].index
### END SOLUTION
# history = data.loc[ ? , :]
# return history.loc[ ? == 0].index
print(unknown_artists(42))
assert len(unknown_artists(42)) == 278
assert type(unknown_artists(42)) == pd.Index
Explanation: Determine artists to recommend
We only want to recommend artists the user didn't listen to yet, which we'll determine by using the listening history.
End of explanation
def score_unknown_artists(user_id):
Score all unknown artists for a given user.
Parameters
----------
user_id: int
User for which to get unknown artists
Returns
-------
list of dict
A list of dictionaries.
### BEGIN SOLUTION
artists = unknown_artists(user_id)
return [{'recommendation': artist, 'score': recommendation_score(artist, user_id)} for artist in artists]
### END SOLUTION
# artists = unknown_artists( ? )
# return [{'recommendation': artist, 'score': recommendation_score( ? , user_id)} for artist in ?]
assert np.allclose(score_unknown_artists(42)[1]['score'], 0.08976655361839528)
assert np.allclose(score_unknown_artists(313)[137]['score'], 0.20616395469219984)
score_unknown_artists(42)[:5]
Explanation: The last requirement for our recommender engine is a function that can score all unknown artists for a given user. We will make this function return a list of dictionaries, which can be easily converted to a DataFrame later on. The list will be generated using a list comprehension.
End of explanation
def user_recommendations(user_id, n_rec=5):
Recommend new artists for a user.
Parameters
----------
user_id: int
User for which to get recommended artists
n_rec: int, optional
Number of recommendations to make
Returns
-------
pandas.DataFrame
A DataFrame containing artist recommendations for the given user,
with their recommendation score.
scores = score_unknown_artists(user_id)
### BEGIN SOLUTION
return (
pd.DataFrame(scores)
.sort_values('score', ascending=False)
.head(n_rec)
.reset_index(drop=True)
)
### END SOLUTION
# return (
# pd.DataFrame( ? )
# .sort_values( ? , ascending=False)
# . ? (n_rec)
# .reset_index(drop=True)
# )
assert user_recommendations(313).loc[4, 'recommendation'] == 'jose gonzalez'
assert len(user_recommendations(1, n_rec=10)) == 10
user_recommendations(642)
Explanation: From the scored artists we can easily derive the best recommendations for a given user.
End of explanation
recommendations = [user_recommendations(user).loc[:, 'recommendation'].rename(user) for user in data.index[:10]]
Explanation: With this final function, it is a small step to get recommendations for multiple users. As our code hasn't been optimized for performance, it is advised to limit the number of users somewhat.
End of explanation
np.transpose(pd.concat(recommendations, axis=1))
g_s = most_similar_artists_history('gorillaz', 642).assign(sim2 = lambda x: x.product(axis=1))
r_1 = g_s.sim2.sum()
total = g_s.cosine_similarity.sum()
print(total)
r_1/total
g_s
Explanation: We can now use the concat function again to get a nice overview of the recommended artists.
End of explanation |
15,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
San Francisco Crime Modeling
Go here for the details on the Kaggle competition
Predictive Goal
Step1: Load the dataset from the prepared Parquet file
Step2: Step 1
Step3: Thus, our machine learning results must be better than guessing that every category is LARCENY/THEFT.
Todo
Step4: Step 2
Step5: What do we do about the "Dates" datetime column ? ...
Since we know from the data profiling results that the time span of the data is over 12 years, let's start with converting the Dates column to an ordinal (an integer value representing the number of days since year 1 day 1) and including with the VectorAssembler. After that we'll try transforming the datetime value to year, month, day, day of month, hour of day, season, etc. DayOfWeek is already provided separately in the dataset.
Step6: Assembling the feature vector ...
Step7: Step 3 | Python Code:
sc
sc.setLogLevel('INFO')
Explanation: San Francisco Crime Modeling
Go here for the details on the Kaggle competition
Predictive Goal: "Given time and location, you must predict the category of crime that occurred."
Data profiling contained in a separate notebook ("SanFranCrime.ipynb")
End of explanation
parqFileName = '/Users/bill.walrond/Documents/dsprj/data/SanFranCrime/train.pqt'
sfc_train = sqlContext.read.parquet(parqFileName)
print sfc_train.count()
print sfc_train.printSchema()
# sfc_train = sfc_train.cache()
from pyspark.ml import Pipeline
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.feature import StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
import numpy as np
Explanation: Load the dataset from the prepared Parquet file
End of explanation
# Index labels, adding metadata to the label column.
# Fit on whole dataset to include all labels in index.
labelIndexer = StringIndexer(inputCol="Category", outputCol="indexedLabel").fit(sfc_train)
sfc_train_t = labelIndexer.transform(sfc_train)
# sfc_train_t = sfc_train_t.cache()
# baseline_preds = sfc_train_t.selectExpr('indexedLabel as prediction', 'double(0) as label')
baseline_preds = sfc_train_t.selectExpr('indexedLabel as label', 'double(0) as prediction')
baseline_preds = baseline_preds.cache()
evaluator = MulticlassClassificationEvaluator(predictionCol='prediction')
evaluator.evaluate(baseline_preds)
print 'Precision: {:08.6f}'.format(evaluator.evaluate(baseline_preds, {evaluator.metricName: 'precision'}))
print 'Recall: {:08.6f}'.format(evaluator.evaluate(baseline_preds, {evaluator.metricName: 'recall'}))
Explanation: Step 1: Establish and evaluate a baseline
From the profiling results, the most frequent category of crime by far is "LARCENY/THEFT". We can set our baseline prediction to assume every crime is LARCENY/THEFT regardless of the actual category or any of the other attributes. Then, evaluate how accurate our baseline predictions are. Later, we will compare how much better/worse the machine learning methods are compared to this baseline.
For now, we're going to start with Precision-Recall for our evaluation framework. Later, we may consider additional evaluation metrics (e.g. AUC).
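As a side note, the same evaluator can already report other multiclass metrics by overriding metricName at evaluation time, for example (illustrative, following the pattern used above):
print 'F1:        {:08.6f}'.format(evaluator.evaluate(baseline_preds, {evaluator.metricName: 'f1'}))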
End of explanation
def computeLogLoss(actualLabels, predictedProbs, eps=1e-15):
    # actualLabels: 1-d array of true class indices, shape (numberOfObs,)
    # predictedProbs: 2-d array of class probabilities, shape (numberOfObs, numberOfClassLabels)
    probs = np.clip(np.asarray(predictedProbs, dtype=float), eps, 1 - eps)
    n = len(actualLabels)
    # only the predicted probability of the true class contributes for each observation
    return -np.sum(np.log(probs[np.arange(n), np.asarray(actualLabels, dtype=int)])) / n
Explanation: Thus, our machine learning results must be better than guessing that every category is LARCENY/THEFT.
Todo: program the LogLoss evaluation metric
This is the multi-class version of the metric. Each observation is in one class and for each observation, you submit a predicted probability for each class. The metric is negative the log likelihood of the model that says each test observation is chosen independently from a distribution that places the submitted probability mass on the corresponding class, for each observation.
$$log loss = -\frac{1}{N}\sum_{i=1}^N\sum_{j=1}^My_{i,j}\log(p_{i,j})$$
where N is the number of observations, M is the number of class labels, \(log\) is the natural logarithm, \(y_{i,j}\) is 1 if observation \(i\) is in class \(j\) and 0 otherwise, and \(p_{i,j}\) is the predicted probability that observation \(i\) is in class \(j\).
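A tiny worked example of the formula (made-up numbers, two observations and three classes):
y_true = np.array([0, 2])              # true class index per observation
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])        # predicted class probabilities
print -np.mean(np.log(p[np.arange(2), y_true]))   # -(ln 0.7 + ln 0.6)/2 ~= 0.434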
End of explanation
from pyspark.ml.feature import OneHotEncoder, StringIndexer
from pyspark.mllib.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
cols = ['Descript','DayOfWeek','PdDistrict','Resolution','Address']
for col in cols:
    stringIndexer = StringIndexer(inputCol=col, outputCol=col+'Index')
    model = stringIndexer.fit(sfc_train)
    sfc_train = model.transform(sfc_train)
    encoder = OneHotEncoder(dropLast=False, inputCol=col+'Index', outputCol=col+'Vec')
    sfc_train = encoder.transform(sfc_train)
print sfc_train.count()
print sfc_train.printSchema()
sfc_train.select('Address','AddressIndex','AddressVec').show(10,truncate=False)
Explanation: Step 2: Prepare the features
Encoding the categorical features ...
End of explanation
import datetime
from pyspark.sql.functions import udf
from pyspark.sql.types import *
udfDateToordinal = udf(lambda dt: dt.toordinal(), LongType())
sfc_train = sfc_train.withColumn('Dates_int',udfDateToordinal(sfc_train.Dates))
sfc_train.select('Dates','Dates_int').show(3, truncate=False)
Explanation: What do we do about the "Dates" datetime column ? ...
Since we know from the data profiling results that the time span of the data is over 12 years, let's start by converting the Dates column to an ordinal (an integer representing the number of days since year 1, day 1) and including it in the VectorAssembler. After that we'll try transforming the datetime value into features such as year, month, day of month, hour of day, and season. DayOfWeek is already provided separately in the dataset.
End of explanation
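As a sketch of that follow-on idea (an assumed next step, not something the current pipeline does), the built-in date functions in pyspark.sql.functions can pull those calendar components out directly:
# Sketch only: derive calendar features from the Dates column with built-in functions
from pyspark.sql.functions import year, month, dayofmonth, hour
sfc_dates = sfc_train.withColumn('Year', year(sfc_train.Dates)) \
                     .withColumn('Month', month(sfc_train.Dates)) \
                     .withColumn('DayOfMonth', dayofmonth(sfc_train.Dates)) \
                     .withColumn('Hour', hour(sfc_train.Dates))
sfc_dates.select('Dates', 'Year', 'Month', 'DayOfMonth', 'Hour').show(3, truncate=False)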
# Use the VectorAssembler to combine the converted Dates column with the
# vectorized categorical columns and also with the lat/long columns
vector_cols = ['Dates_int'] + [name for name,type in sfc_train.dtypes if 'Vec' in name ] + ['X','Y']
assembler = VectorAssembler(inputCols=vector_cols, outputCol="features")
sfc_train = assembler.transform(sfc_train)
sfc_train.select('Category','features').show(5,truncate=False)
# trim down to just the columns we need, then cache the dataframe; this will help to
# keep the size of the working dataset more manageable
sfc_train_trimmed = sfc_train.select('Category','features')
sfc_train_trimmed = sfc_train_trimmed.cache()
# write the trimmed DF out to disk, then read it back in
preppedFileName = '/Users/bill.walrond/Documents/dsprj/data/SanFranCrime/prepped.pqt'
sfc_train_trimmed.write.parquet(preppedFileName, mode='overwrite')
# null out all our dataframes
# preppedFileName = '/Users/bill.walrond/Documents/dsprj/data/SanFranCrime/prepped.pqt'
preppedFileName = 's3n://caserta-bucket1/lab/SanFranCrime/prepped.pqt/'
sfc_train = None
predictions = None
model = None
encoder = None
baseline_preds = None
sqlContext.clearCache()
prepped = sqlContext.read.parquet(preppedFileName)
print prepped.count()
print prepped.printSchema()
prepped = prepped.cache()
Explanation: Assembling the feature vector ...
End of explanation
from pyspark.ml import Pipeline
from pyspark.ml.classification import GBTClassifier
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, VectorIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Index labels, adding metadata to the label column.
# Fit on whole dataset to include all labels in index.
if "indexedLabel" not in prepped.columns:
labelIndexer = StringIndexer(inputCol="Category", outputCol="indexedLabel").fit(prepped)
# Split the data into training and test sets (30% held out for testing)
(trainingData, testData) = prepped.randomSplit([0.7, 0.3])
# Train a RandomForest model.
rf = RandomForestClassifier(labelCol='indexedLabel', featuresCol='features',
                            numTrees=30,
                            maxDepth=25,
                            featureSubsetStrategy='auto')
# Chain the label indexer and RF in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, rf])
# Train model. This also runs the indexers.
model = pipeline.fit(trainingData)
# Make predictions - returns a DataFrame
predictions = model.transform(testData)
print predictions.printSchema()
# Select example rows to display.
predictions.select("prediction", "indexedLabel", "features").show(5)
predictions = predictions.cache()
# predictions.select("prediction", "indexedLabel", "features").show(10)
predictions.select("prediction").groupBy('prediction').count().show()
eval_preds = predictions.select('prediction','indexedLabel')
eval_preds = eval_preds.cache()
evaluator = MulticlassClassificationEvaluator(predictionCol='prediction', labelCol='indexedLabel')
evaluator.evaluate(eval_preds)
print 'Precision: {:08.6f}'.format(evaluator.evaluate(eval_preds, {evaluator.metricName: 'precision'}))
print 'Recall: {:08.6f}'.format(evaluator.evaluate(eval_preds, {evaluator.metricName: 'recall'}))
# null out all our dataframes
sfc_train = None
predictions = None
model = None
encoder = None
baseline_preds = None
sqlContext.clearCache()
sc.stop()
Explanation: Step 3: Create train and tune sets and fit a model
ToDo: revise the splitting approach to be temporally aware
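One possible way to make the split temporally aware (a sketch that assumes the ordinal Dates_int column is kept alongside the features in the prepped dataframe, and that the cutoff is chosen to leave roughly 30% of the data in the test set) is to hold out the most recent slice of the data instead of sampling at random:
# Sketch only: time-based split, assuming Dates_int was retained in the prepped dataframe
import datetime
cutoff = datetime.date(2014, 1, 1).toordinal()   # assumed cutoff; pick one near the 70th percentile of Dates_int
trainingData = prepped.filter(prepped.Dates_int <= cutoff)
testData = prepped.filter(prepped.Dates_int > cutoff)
print 'train: {}, test: {}'.format(trainingData.count(), testData.count())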
End of explanation |
15,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Fluorinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
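As a purely hypothetical illustration of how a documenter might record an ENUM value for a property like 26.1 (the value below is not a statement about any real model), the call would look like:
# HYPOTHETICAL example value only -- replace with the method your model actually uses
DOC.set_value("Monte Carlo Independent Column Approximation")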
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
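For a STRING property such as 29.1, the documenter supplies free text; the sentence below is only a placeholder sketch, not a description of any actual model:
# HYPOTHETICAL placeholder text only -- replace with your model's real overview
DOC.set_value("One- or two-sentence overview of the convection and turbulence schemes goes here.")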
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
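For a BOOLEAN property such as 30.4 the value is passed without quotes; the choice shown is purely illustrative:
# HYPOTHETICAL example only -- use True or False according to your scheme
DOC.set_value(True)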
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
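For a property with Cardinality 1.N, several of the listed choices may apply. The sketch below assumes that repeated DOC.set_value() calls each record one entry (check the es-doc notebook documentation if unsure), and the processes named are hypothetical rather than a claim about any particular scheme:
# HYPOTHETICAL example entries only -- list the processes your scheme actually represents
DOC.set_value("entrainment")
DOC.set_value("detrainment")
DOC.set_value("updrafts")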
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
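A FLOAT property takes a plain number; for example, a documenter describing a hypothetical 94 GHz (W-band, CloudSat-like) radar simulator might write the following, where the value is illustrative only:
# HYPOTHETICAL example only -- enter your simulator's actual frequency in Hz
DOC.set_value(94.0e9)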
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
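An INTEGER property is filled in the same way; 1850 is shown below only because it is a commonly used pre-industrial reference year, not because it applies to any specific model:
# HYPOTHETICAL example only -- enter the reference year your configuration actually uses
DOC.set_value(1850)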
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
15,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting is an essential skill for Engineers. Plots can reveal trends in data and outliers. Plots are a way to visually communicate results with your engineering team, supervisors and customers. In this post, we are going to plot a couple of trig functions using Python and matplotlib. Matplotlib is a plotting library that can produce line plots, bar graphs, histograms and many other types of plots using Python. Matplotlib is not included in the standard library. If you downloaded Python from python.org, you will need to install matplotlib and numpy with pip on the command line.
```text
pip install matplotlib
pip install numpy
```
If you are using the Anaconda distribution of Python (which is the distribution of Python I recommend for undergraduate engineers) matplotlib and numpy (plus a bunch of other libraries useful for engineers) are included. If you are using Anaconda, you do not need to install any additional packages to use matplotlib.
In this post, we are going to build a couple of plots which show the trig functions sine and cosine. We'll start by importing matplotlib and numpy using the standard lines import matplotlib.pyplot as plt and import numpy as np. This means we can use the short alias plt and np when we call these two libraries. You could import numpy as wonderburger and use wonderburger.sin() to call the numpy sine function, but this would look funny to other engineers. The line import numpy as np has become a common convention and will look familiar to other engineers using Python. In case you are working in a Jupyter notebook, the %matplotlib inline command is also necessary to view the plots directly in the notebook.
Step1: Next we will build a set of x values from zero to 4π in increments of 0.1 radians to use in our plot. The x-values are stored in a numpy array. Numpy's arange() function has three arguments
Step2: To create the plot, we use matplotlib's plt.plot() function. The two arguments are our numpy arrays x and y. The line plt.show() will show the finished plot.
Step3: Next let's build a plot which shows two trig functions, sine and cosine. We will create the same two numpy arrays x and y as before, and add a third numpy array z which is the cosine of x.
Step4: To plot both sine and cosine on the same set of axes, we need to include two pairs of x,y values in our plt.plot() arguments. The first pair is x,y. This corresponds to the sine function. The second pair is x,z. This corresponds to the cosine function. If you try and only add three arguments as in plt.plot(x,y,z), your plot will not show sine and cosine on the same set of axes.
Step5: Let's build one more plot, a plot which shows the sine and cosine of x and also includes axis labels, a title and a legend. We build the numpy arrays using the trig functions as before
Step6: The plt.plot() call is the same as before using two pairs of x and y values. To add axis labels we will use the following methods | Python Code:
import matplotlib.pyplot as plt
import numpy as np
# if using a jupyter notebook
%matplotlib inline
Explanation: Plotting is an essential skill for Engineers. Plots can reveal trends in data and outliers. Plots are a way to visually communicate results with your engineering team, supervisors and customers. In this post, we are going to plot a couple of trig functions using Python and matplotlib. Matplotlib is a plotting library that can produce line plots, bar graphs, histograms and many other types of plots using Python. Matplotlib is not included in the standard library. If you downloaded Python from python.org, you will need to install matplotlib and numpy with pip on the command line.
```text
pip install matplotlib
pip install numpy
```
If you are using the Anaconda distribution of Python (which is the distribution of Python I recommend for undergraduate engineers) matplotlib and numpy (plus a bunch of other libraries useful for engineers) are included. If you are using Anaconda, you do not need to install any additional packages to use matplotlib.
In this post, we are going to build a couple of plots which show the trig functions sine and cosine. We'll start by importing matplotlib and numpy using the standard lines import matplotlib.pyplot as plt and import numpy as np. This means we can use the short alias plt and np when we call these two libraries. You could import numpy as wonderburger and use wonderburger.sin() to call the numpy sine function, but this would look funny to other engineers. The line import numpy as np has become a common convention and will look familiar to other engineers using Python. In case you are working in a Juypiter notebook, the %matplotlib inline command is also necessary to view the plots directly in the notebook.
End of explanation
x = np.arange(0,4*np.pi,0.1) # start,stop,step
y = np.sin(x)
Explanation: Next we will build a set of x values from zero to 4π in increments of 0.1 radians to use in our plot. The x-values are stored in a numpy array. Numpy's arange() function has three arguments: start, stop, step. We start at zero, stop at 4π and step by 0.1 radians. Then we define a variable y as the sine of x using numpy's sin() function.
End of explanation
plt.plot(x,y)
plt.show()
Explanation: To create the plot, we use matplotlib's plt.plot() function. The two arguments are our numpy arrays x and y. The line plt.show() will show the finished plot.
End of explanation
x = np.arange(0,4*np.pi,0.1) # start,stop,step
y = np.sin(x)
z = np.cos(x)
Explanation: Next let's build a plot which shows two trig functions, sine and cosine. We will create the same two numpy arrays x and y as before, and add a third numpy array z which is the cosine of x.
End of explanation
plt.plot(x,y,x,z)
plt.show()
Explanation: To plot both sine and cosine on the same set of axies, we need to include two pair of x,y values in our plt.plot() arguments. The first pair is x,y. This corresponds to the sine function. The second pair is x,z. This correspons to the cosine function. If you try and only add three arguments as in plt.plot(x,y,z), your plot will not show sine and cosine on the same set of axes.
End of explanation
x = np.arange(0,4*np.pi-1,0.1) # start,stop,step
y = np.sin(x)
z = np.cos(x)
Explanation: Let's build one more plot, a plot which shows the sine and cosine of x and also includes axis labels, a title and a legend. We build the numpy arrays using the trig functions as before:
End of explanation
plt.plot(x,y,x,z)
plt.xlabel('x values from 0 to 4pi') # string must be enclosed with quotes ' '
plt.ylabel('sin(x) and cos(x)')
plt.title('Plot of sin and cos from 0 to 4pi')
plt.legend(['sin(x)', 'cos(x)']) # legend entries as seperate strings in a list
plt.show()
Explanation: The plt.plot() call is the same as before using two pairs of x and y values. To add axis labels we will use the following methods:
| matplotlib method | description | example |
| ----------------- | ----------- | ------- |
| plt.xlabel() | x-axis label | plt.xlabel('x values from 0 to 4pi') |
| plt.ylabel() | y-axis label | plt.ylabel('sin(x) and cos(x)') |
| plt.title() | plot title | plt.title('Plot of sin and cos from 0 to 4pi') |
| plt.legend([ ]) | legend | plt.legend(['sin(x)', 'cos(x)']) |
Note that plt.legend() method requires a list of strings (['string1', 'string2']), where the individual strings are enclosed with qutoes, then seperated by commas and finally inclosed in brackets to make a list. The first string in the list corresponds to the first x-y pair when we called plt.plot() , the second string in the list corresponds to the second x,y pair in the plt.plot() line.
End of explanation |
15,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: word2vec
<img src="http
Step2: Check for understanding
<br>
<details><summary>
How many dimensions are data represented in?
</summary>
<br>
There are 2 dimensions.
</details>
<br>
<details><summary>
How many dimensions would we need to represent for typical word vectors?
</summary>
<br>
<br>
5
<br>
<br>
Typically you would use n-1 word vectors, a baseline word would be coded as all zeros.
</details> | Python Code:
corpus = """The man and woman meet each other ...
They become king and queen ...
They got old and stop talking to each other. Instead, they read books and magazines ..."""
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Let's hand assign the words to vectors
important_words = ['queen', 'book', 'king', 'magazine', 'woman', 'man']
vectors = np.array([[0.1, 0.3], # queen
[-0.5, -0.1], # book
[0.2, 0.2], # king
[-0.3, -0.2], # magazine
[-0.5, 0.4], # woman
[-0.45, 0.3]]) # man
plt.plot(vectors[:,0], vectors[:,1], 'o')
plt.xlim(-0.6, 0.3)
plt.ylim(-0.3, 0.5)
for word, x, y in zip(important_words, vectors[:,0], vectors[:,1]):
plt.annotate(word, (x, y), size=12)
Explanation: word2vec
<img src="http://billsdata.net/wordpress/wp-content/uploads/2015/11/wikimap2.jpg" style="width: 400px;"/>
Pop Quiz
<br>
<details><summary>
Do computers prefer numbers or words?
</summary>
<br>
<br>
__Numbers__
<br>
<br>
word2vec is currently the best algorithm to map words (strings) to numbers (vectors of floats).
</details>
By The End Of This Notebook You Should Be Able To:
Describe why word2vec is popular and powerful
Explain how word2vec is a neural network
Understand the common architectures of word2vec
Apply word vectors to "do math" on words
Why is word2vec so popular?
Organizes words by semantic meaning.
Turns text into a numerical form that Deep Learning Nets and machine learning algorithms can in-turn use.
How does word2vec work?
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/John_Rupert_Firth.png/220px-John_Rupert_Firth.png" style="width: 400px;"/>
“You shall know a word
by the company it keeps”
- J. R. Firth 1957
Distributional Hypothesis
Words that are used and occur in the same contexts tend to have similar meanings
Example:
... government debt problems are turning into banking crises...
... European governments need unified banking regulation to replace the hodgepodge of debt regulations...
The words: government, regulation and debt probably represent some aspect of banking since they frequently appear nearby.
The words: Pokemon and tubular probably don't represent any aspect of banking since they don't frequently appear nearby.
How does word2vec model the Distributional Hypothesis?
word2vec is a very simple neural network:
<img src="images/w2v_neural_net.png" style="width: 400px;"/>
Input = text corpus
Output = vector for each word
word2vec as a compression algorithm
<img src="images/w2v_neural_net.png" style="width: 400px;"/>
Note the bow-tie shape. That is an autoencoder.
Autoencoders compress sparse representations into dense representations.
Learns the mapping that best preserves the structure of the original space.
Story time...
The man and woman meet each other ...
They become king and queen ...
They got old and stop talking to each other. Instead, they read books and magazines ...
End of explanation
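To make the idea of doing math on word vectors concrete, here is a small sketch that reuses the hand-assigned important_words and vectors arrays from the cell above and compares words with cosine similarity; the numbers are only illustrative because these toy vectors were assigned by hand rather than learned by word2vec.
import numpy as np
def cosine_similarity(u, v):
    # cosine of the angle between two word vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
king, queen, book = vectors[2], vectors[0], vectors[1]
print("king vs queen:", cosine_similarity(king, queen))  # close together in the toy space
print("king vs book:", cosine_similarity(king, book))    # far apart in the toy space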
# Encode each word using 1-hot encoding
{'queen': [0, 0, 0, 0, 0],
'book': [0, 0, 0, 0, 1],
'king': [0, 0, 0, 1, 0],
'magazine': [0, 0, 1, 0, 0],
'woman': [0, 1, 0, 0, 0],
'man': [1, 0, 0, 0, 0],
}
Explanation: Check for understanding
<br>
<details><summary>
How many dimensions are data represented in?
</summary>
<br>
There are 2 dimensions.
</details>
<br>
<details><summary>
How many dimensions would we need to represent for typical word vectors?
</summary>
<br>
<br>
5
<br>
<br>
Typically you would use n-1 word vectors, a baseline word would be coded as all zeros.
</details>
End of explanation |
15,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 18 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you are going to learn how to
Step1: Question 1
Step2: Question 2 | Python Code:
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("JXJQYpgFAyc",width=640,height=360) # Numerical integration
Explanation: Day 18 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you are going to learn how to:
Numerically integrate a function
Numerically differentiate a function
Get a sense of how the result depends on the step size you use.
Assignment instructions
Watch the videos below and complete the assigned programming problems.
End of explanation
# Put your code here
import math
def integrate_sin(begin, end, Nstep):
    """Rectangle-rule integral of sin(x) from begin to end using Nstep equal-sized steps."""
    dx = (end - begin)/Nstep
    total = 0.0
    xpos = begin
    for i in range(Nstep):
        total += math.sin(xpos)*dx
        xpos += dx
    return total
for Nstep in (10, 100, 1000):
    answer = integrate_sin(0.0, math.pi, Nstep)
    error = abs(answer - 2.0)/2.0
    print("for Nstep = {0:5d} (dx = {1:3f}) we get an answer of {2:3f} and a fractional error of {3:4e}".format(Nstep, math.pi/Nstep, answer, error))
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("b0K8LiHyrBg",width=640,height=360) # Numerical differentiation
Explanation: Question 1: Write a function that uses the rectangle rule to integrate $f(x) = \sin(x)$ from $x_{beg}= 0$ to $x_{end} = \pi$ by taking $N_{step}$ equal-sized steps $\Delta x = \frac{x_{end} - x_{beg}}{N_{step}}$. Allow $N_{step}$ and the beginning and ending of the range to be defined by user-set parameters. For values of $N_{step} = 10, 100$, and $1000$, how close are you to the true answer? (In other words, calculate the fractional error as defined below.)
Note 1: $\int_{0}^{\pi} \sin(x) dx = \left. -\cos(x) \right|_0^\pi = 2$
Note 2: The "error" is defined as $\epsilon = |\frac{I - T}{T}|$, where I is the integrated answer, T is the true (i.e., analytic) answer, and the vertical bars denote that you take the absolute value. This gives you the fractional difference between your integrated answer and the true answer.
End of explanation
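For comparison only (not part of the assignment), the same rectangle-rule sum can be written without an explicit loop by letting numpy generate the sample points; this is just a vectorized sketch of the loop above.
import numpy as np
Nstep = 1000
dx = np.pi / Nstep
x_left = np.linspace(0.0, np.pi, Nstep, endpoint=False)  # left edge of each rectangle
approx = np.sum(np.sin(x_left) * dx)
print("vectorized rectangle rule:", approx, "fractional error:", abs(approx - 2.0) / 2.0)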
# Put your code here
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
def f(x):
    return np.exp(-2.0*x)
def dfdx(x):
    return -2.0*np.exp(-2.0*x)
x = np.linspace(-3.0, 3.0, 100)
# centered difference approximation: df/dx ~= (f(x+dx) - f(x-dx)) / (2*dx)
for dx in (0.1, 0.01, 0.001):
    deriv = (f(x+dx) - f(x-dx))/(2.0*dx)
    error = np.abs((deriv - dfdx(x))/dfdx(x))
    plt.plot(x, error, label="dx = {}".format(dx))
    print("for dx = {0:g} the average fractional error is: {1:4e}".format(dx, error.mean()))
plt.legend()
Explanation: Question 2: Write a function that calculates the derivative of $f(x) = e^{-2x}$ at several points between -3.0 and 3.0, using two points that are a distance $\Delta x$ from the point, x, where we want the value of the derivative. Calculate the difference between this value and the answer to the analytic solution, $\frac{df}{dx} = -2 e^{-2x}$, for $\Delta x$ = 0.1, 0.01 and 0.001 (in other words, calculate the error as defined above).
Hint: use np.linspace() to create a range of values of x that are regularly-spaced, create functions that correspond to $f(x)$ and $\frac{df}{dx}$, and use numpy to calculate the derivatives and the error. Note that if x is a numpy array, a function f(x) that returns a value will also be a numpy array. In other words, the function:
def f(x):
return np.exp(-2.0*x)
will return an array of values corresponding to the function $f(x)$ defined above if given an array of x values.
End of explanation |
15,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Using DTensors with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Next, import tensorflow and tensorflow.experimental.dtensor, and configure TensorFlow to use 8 virtual CPUs.
Even though this example uses CPUs, DTensor works the same way on CPU, GPU or TPU devices.
Step3: Deterministic pseudo-random number generators
One thing you should note is that DTensor API requires each of the running client to have the same random seeds, so that it could have deterministic behavior for initializing the weights. You can achieve this by setting the global seeds in keras via tf.keras.utils.set_random_seed().
Step4: Creating a Data Parallel Mesh
This tutorial demonstrates Data Parallel training. Adapting to Model Parallel training and Spatial Parallel training can be as simple as switching to a different set of Layout objects. Refer to DTensor in-depth ML Tutorial for more information on distributed training beyond Data Parallel.
Data Parallel training is a commonly used parallel training scheme, also used by for example tf.distribute.MirroredStrategy.
With DTensor, a Data Parallel training loop uses a Mesh that consists of a single 'batch' dimension, where each device runs a replica of the model that receives a shard from the global batch.
Step5: As each device runs a full replica of the model, the model variables shall be fully replicated across the mesh (unsharded). As an example, a fully replicated Layout for a rank-2 weight on this Mesh would be as follows
Step6: A layout for a rank-2 data tensor on this Mesh would be sharded along the first dimension (sometimes known as batch_sharded),
Step7: Create Keras layers with layout
In the data parallel scheme, you usually create your model weights with a fully replicated layout, so that each replica of the model can do calculations with the sharded input data.
In order to configure the layout information for your layers' weights, Keras has exposed an extra parameter in the layer constructor for most of the built-in layers.
The following example builds a small image classification model with fully replicated weight layout. You can specify layout information kernel and bias in tf.keras.layers.Dense via argument kernel_layout and bias_layout. Most of the built-in keras layers are ready for explicitly specifying the Layout for the layer weights.
Step8: You can check the layout information by examining the layout property on the weights.
Step10: Load a dataset and build input pipeline
Load a MNIST dataset and configure some pre-processing input pipeline for it. The dataset itself is not associated with any DTensor layout information. There are plans to improve DTensor Keras integration with tf.data in future TensorFlow releases.
Step11: Define the training logic for the model
Next define the training and evaluation logic for the model.
As of TensorFlow 2.9, you have to write a custom-training-loop for a DTensor enabled Keras model. This is to pack the input data with proper layout information, which is not integrated with the standard tf.keras.Model.fit() or tf.keras.Model.eval() functions from Keras. you will get more tf.data support in the upcoming release.
Step12: Metrics and Optimizers
When using DTensor API with Keras Metric and Optimizer, you will need to provide the extra mesh information, so that any internal state variables and tensors can work with variables in the model.
For an optimizer, DTensor introduces a new experimental namespace keras.dtensor.experimental.optimizers, where many existing Keras Optimizers are extended to receive an additional mesh argument. In future releases, it may be merged with Keras core optimizers.
For metrics, you can directly specify the mesh to the constructor as an argument to make it a DTensor compatible Metric.
Step13: Train the model
The following example shards the data from input pipeline on the batch dimension, and train with the model, which has fully replicated weights.
With 3 epochs, the model should achieve about 97% of accuracy.
Step14: Specify Layout for existing model code
Often you have models that work well for your use case. Specifying Layout information to each individual layer within the model will be a large amount of work requiring a lot of edits.
To help you easily convert your existing Keras model to work with DTensor API you can use the new dtensor.LayoutMap API that allow you to specify the Layout from a global point of view.
First, you need to create a LayoutMap instance, which is a dictionary-like object that contains all the Layout you would like to specify for your model weights.
LayoutMap needs a Mesh instance at init, which is used to provide a default replicated Layout for any weights that don't have a Layout configured. In case you would like all your model weights to be fully replicated, you can provide an empty LayoutMap, and the default mesh will be used to create a replicated Layout.
LayoutMap uses a string as key and a Layout as value. There is a behavior difference between a normal Python dict and this class: the string key is treated as a regex when retrieving the value.
Subclassed Model
Consider the following model defined using the Keras subclassing Model syntax.
Step15: There are 4 weights in this model: the kernel and bias for two Dense layers. Each of them is mapped based on the object path
Step16: The model weights are created on the first call, so call the model with a DTensor input and confirm the weights have the expected layouts.
Step17: With this, you can quickly map the Layout to your models without updating any of your existing code.
Sequential and Functional Models
For Keras functional and sequential models, you can use LayoutMap as well.
Note | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install --quiet --upgrade --pre tensorflow tensorflow-datasets
Explanation: Using DTensors with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/dtensor_keras_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
In this tutorial, you will learn how to use DTensor with Keras.
Through DTensor integration with Keras, you can reuse your existing Keras layers and models to build and train distributed machine learning models.
You will train a multi-layer classification model with the MNIST data. Setting the layout for subclassing model, Sequential model, and functional model will be demonstrated.
This tutorial assumes that you have already read the DTensor programming guide, and are familiar with basic DTensor concepts like Mesh and Layout.
This tutorial is based on https://www.tensorflow.org/datasets/keras_example.
Setup
DTensor is part of TensorFlow 2.9.0 release.
End of explanation
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.experimental import dtensor
def configure_virtual_cpus(ncpu):
phy_devices = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(
phy_devices[0],
[tf.config.LogicalDeviceConfiguration()] * ncpu)
configure_virtual_cpus(8)
tf.config.list_logical_devices('CPU')
devices = [f'CPU:{i}' for i in range(8)]
Explanation: Next, import tensorflow and tensorflow.experimental.dtensor, and configure TensorFlow to use 8 virtual CPUs.
Even though this example uses CPUs, DTensor works the same way on CPU, GPU or TPU devices.
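For example, a rough sketch of how the device list might be built when GPUs are visible (hypothetical; the mesh dimension used later must match the number of devices in this list):
python
gpus = tf.config.list_logical_devices('GPU')
if gpus:
  devices = [f'GPU:{i}' for i in range(len(gpus))]
else:
  devices = [f'CPU:{i}' for i in range(8)]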
End of explanation
tf.keras.backend.experimental.enable_tf_random_generator()
tf.keras.utils.set_random_seed(1337)
Explanation: Deterministic pseudo-random number generators
One thing you should note is that the DTensor API requires each of the running clients to have the same random seeds, so that it can have deterministic behavior for initializing the weights. You can achieve this by setting the global seeds in Keras via tf.keras.utils.set_random_seed().
End of explanation
mesh = dtensor.create_mesh([("batch", 8)], devices=devices)
Explanation: Creating a Data Parallel Mesh
This tutorial demonstrates Data Parallel training. Adapting to Model Parallel training and Spatial Parallel training can be as simple as switching to a different set of Layout objects. Refer to DTensor in-depth ML Tutorial for more information on distributed training beyond Data Parallel.
Data Parallel training is a commonly used parallel training scheme, also used, for example, by tf.distribute.MirroredStrategy.
With DTensor, a Data Parallel training loop uses a Mesh that consists of a single 'batch' dimension, where each device runs a replica of the model that receives a shard from the global batch.
End of explanation
example_weight_layout = dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh) # or
example_weight_layout = dtensor.Layout.replicated(mesh, rank=2)
Explanation: As each device runs a full replica of the model, the model variables shall be fully replicated across the mesh (unsharded). As an example, a fully replicated Layout for a rank-2 weight on this Mesh would be as follows:
End of explanation
example_data_layout = dtensor.Layout(['batch', dtensor.UNSHARDED], mesh) # or
example_data_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)
Explanation: A layout for a rank-2 data tensor on this Mesh would be sharded along the first dimension (sometimes known as batch_sharded),
End of explanation
unsharded_layout_2d = dtensor.Layout.replicated(mesh, 2)
unsharded_layout_1d = dtensor.Layout.replicated(mesh, 1)
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128,
activation='relu',
name='d1',
kernel_layout=unsharded_layout_2d,
bias_layout=unsharded_layout_1d),
tf.keras.layers.Dense(10,
name='d2',
kernel_layout=unsharded_layout_2d,
bias_layout=unsharded_layout_1d)
])
Explanation: Create Keras layers with layout
In the data parallel scheme, you usually create your model weights with a fully replicated layout, so that each replica of the model can do calculations with the sharded input data.
In order to configure the layout information for your layers' weights, Keras has exposed an extra parameter in the layer constructor for most of the built-in layers.
The following example builds a small image classification model with a fully replicated weight layout. You can specify the layout information for the kernel and bias in tf.keras.layers.Dense via the kernel_layout and bias_layout arguments. Most of the built-in Keras layers support explicitly specifying a Layout for the layer weights.
End of explanation
for weight in model.weights:
print(f'Weight name: {weight.name} with layout: {weight.layout}')
break
Explanation: You can check the layout information by examining the layout property on the weights.
End of explanation
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
return tf.cast(image, tf.float32) / 255., label
batch_size = 128
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(batch_size)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
Explanation: Load a dataset and build input pipeline
Load the MNIST dataset and configure a pre-processing input pipeline for it. The dataset itself is not associated with any DTensor layout information. There are plans to improve DTensor Keras integration with tf.data in future TensorFlow releases.
End of explanation
@tf.function
def train_step(model, x, y, optimizer, metrics):
with tf.GradientTape() as tape:
logits = model(x, training=True)
# tf.reduce_sum sums the batch sharded per-example loss to a replicated
# global loss (scalar).
loss = tf.reduce_sum(tf.keras.losses.sparse_categorical_crossentropy(
y, logits, from_logits=True))
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for metric in metrics.values():
metric.update_state(y_true=y, y_pred=logits)
loss_per_sample = loss / len(x)
results = {'loss': loss_per_sample}
return results
@tf.function
def eval_step(model, x, y, metrics):
logits = model(x, training=False)
loss = tf.reduce_sum(tf.keras.losses.sparse_categorical_crossentropy(
y, logits, from_logits=True))
for metric in metrics.values():
metric.update_state(y_true=y, y_pred=logits)
loss_per_sample = loss / len(x)
results = {'eval_loss': loss_per_sample}
return results
def pack_dtensor_inputs(images, labels, image_layout, label_layout):
  # Split the per-host batch into one piece per local device.
  num_local_devices = image_layout.mesh.num_local_devices()
  images = tf.split(images, num_local_devices)
  labels = tf.split(labels, num_local_devices)
  # Pack the per-device pieces into DTensors with the requested layouts.
  images = dtensor.pack(images, image_layout)
  labels = dtensor.pack(labels, label_layout)
  return images, labels
Explanation: Define the training logic for the model
Next, define the training and evaluation logic for the model.
As of TensorFlow 2.9, you have to write a custom training loop for a DTensor-enabled Keras model. This is needed to pack the input data with the proper layout information, which is not integrated with the standard tf.keras.Model.fit() or tf.keras.Model.eval() functions from Keras. More tf.data support is expected in upcoming releases.
End of explanation
optimizer = tf.keras.dtensor.experimental.optimizers.Adam(0.01, mesh=mesh)
metrics = {'accuracy': tf.keras.metrics.SparseCategoricalAccuracy(mesh=mesh)}
eval_metrics = {'eval_accuracy': tf.keras.metrics.SparseCategoricalAccuracy(mesh=mesh)}
Explanation: Metrics and Optimizers
When using DTensor API with Keras Metric and Optimizer, you will need to provide the extra mesh information, so that any internal state variables and tensors can work with variables in the model.
For an optimizer, DTensor introduces a new experimental namespace keras.dtensor.experimental.optimizers, where many existing Keras Optimizers are extended to receive an additional mesh argument. In future releases, it may be merged with Keras core optimizers.
For metrics, you can directly specify the mesh to the constructor as an argument to make it a DTensor compatible Metric.
End of explanation
num_epochs = 3
image_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=4)
label_layout = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1)
for epoch in range(num_epochs):
print("============================")
print("Epoch: ", epoch)
for metric in metrics.values():
metric.reset_state()
step = 0
results = {}
pbar = tf.keras.utils.Progbar(target=None, stateful_metrics=[])
for input in ds_train:
images, labels = input[0], input[1]
images, labels = pack_dtensor_inputs(
images, labels, image_layout, label_layout)
results.update(train_step(model, images, labels, optimizer, metrics))
for metric_name, metric in metrics.items():
results[metric_name] = metric.result()
pbar.update(step, values=results.items(), finalize=False)
step += 1
pbar.update(step, values=results.items(), finalize=True)
for metric in eval_metrics.values():
metric.reset_state()
for input in ds_test:
images, labels = input[0], input[1]
images, labels = pack_dtensor_inputs(
images, labels, image_layout, label_layout)
results.update(eval_step(model, images, labels, eval_metrics))
for metric_name, metric in eval_metrics.items():
results[metric_name] = metric.result()
for metric_name, metric in results.items():
print(f"{metric_name}: {metric.numpy()}")
Explanation: Train the model
The following example shards the data from the input pipeline along the batch dimension and trains the model, which has fully replicated weights.
With 3 epochs, the model should achieve about 97% accuracy.
End of explanation
class SubclassedModel(tf.keras.Model):
def __init__(self, name=None):
super().__init__(name=name)
self.feature = tf.keras.layers.Dense(16)
self.feature_2 = tf.keras.layers.Dense(24)
self.dropout = tf.keras.layers.Dropout(0.1)
def call(self, inputs, training=None):
x = self.feature(inputs)
x = self.dropout(x, training=training)
return self.feature_2(x)
Explanation: Specify Layout for existing model code
Often you have models that work well for your use case. Specifying Layout information to each individual layer within the model will be a large amount of work requiring a lot of edits.
To help you easily convert your existing Keras model to work with the DTensor API, you can use the new dtensor.LayoutMap API, which allows you to specify the Layout from a global point of view.
First, you need to create a LayoutMap instance, which is a dictionary-like object that contains all the Layout you would like to specify for your model weights.
LayoutMap needs a Mesh instance at init, which is used to provide a default replicated Layout for any weights that don't have a Layout configured. In case you would like all your model weights to be fully replicated, you can provide an empty LayoutMap, and the default mesh will be used to create a replicated Layout.
LayoutMap uses a string as key and a Layout as value. There is a behavior difference between a normal Python dict and this class: the string key is treated as a regex when retrieving the value.
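As a minimal sketch of the regex behaviour (reusing the mesh defined earlier; the lookup strings are made up for illustration), a single entry can cover several weight paths, while an empty LayoutMap simply leaves every weight with the default replicated Layout:
python
demo_map = tf.keras.dtensor.experimental.LayoutMap(mesh=mesh)
demo_map['feature.*kernel'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)
# Both paths below are matched by the single 'feature.*kernel' entry.
print(demo_map['feature.kernel'])
print(demo_map['feature_2.kernel'])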
Subclassed Model
Consider the following model defined using the Keras subclassing Model syntax.
End of explanation
layout_map = tf.keras.dtensor.experimental.LayoutMap(mesh=mesh)
layout_map['feature.*kernel'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)
layout_map['feature.*bias'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1)
with tf.keras.dtensor.experimental.layout_map_scope(layout_map):
subclassed_model = SubclassedModel()
Explanation: There are 4 weights in this model: the kernel and bias for two Dense layers. Each of them is mapped based on the object path:
model.feature.kernel
model.feature.bias
model.feature_2.kernel
model.feature_2.bias
Note: For Subclassed Models, the attribute name, rather than the .name attribute of layer are used as the key to retrieve the Layout from the mapping. This is consistent with the convention followed by tf.Module checkpointing. For complex models with more than a few layers, you can manually inspect checkpoints to see the attribute mappings.
Now define the following LayoutMap and apply it to the model.
End of explanation
dtensor_input = dtensor.copy_to_mesh(tf.zeros((16, 16)), layout=unsharded_layout_2d)
# Trigger the weights creation for subclass model
subclassed_model(dtensor_input)
print(subclassed_model.feature.kernel.layout)
Explanation: The model weights are created on the first call, so call the model with a DTensor input and confirm the weights have the expected layouts.
End of explanation
layout_map = tf.keras.dtensor.experimental.LayoutMap(mesh=mesh)
layout_map['feature.*kernel'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=2)
layout_map['feature.*bias'] = dtensor.Layout.batch_sharded(mesh, 'batch', rank=1)
with tf.keras.dtensor.experimental.layout_map_scope(layout_map):
inputs = tf.keras.Input((16,), batch_size=16)
x = tf.keras.layers.Dense(16, name='feature')(inputs)
x = tf.keras.layers.Dropout(0.1)(x)
output = tf.keras.layers.Dense(32, name='feature_2')(x)
model = tf.keras.Model(inputs, output)
print(model.layers[1].kernel.layout)
with tf.keras.dtensor.experimental.layout_map_scope(layout_map):
model = tf.keras.Sequential([
tf.keras.layers.Dense(16, name='feature', input_shape=(16,)),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(32, name='feature_2')
])
print(model.layers[2].kernel.layout)
Explanation: With this, you can quickly map the Layout to your models without updating any of your existing code.
Sequential and Functional Models
For Keras functional and sequential models, you can use LayoutMap as well.
Note: For functional and sequential models, the mappings are slightly different. The layers in the model don't have a public attribute attached to the model (though you can access them via model.layers as a list). Use the string name as the key in this case. The string name is guaranteed to be unique within a model.
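If you are unsure which string names to use as keys, you can list them from the model (a quick sketch using the model defined above):
python
print([layer.name for layer in model.layers])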
End of explanation |
15,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
    """Compute sigmoid of x.
    Arguments:
    x -- A scalar
    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
    """Compute the sigmoid of x.
    Arguments:
    x -- A scalar or numpy array of any size
    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
End of explanation
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
    """Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
    Arguments:
    x -- A scalar or numpy array
    Return:
    ds -- Your computed gradient.
    """
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
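As a quick check of where the formula comes from (a short derivation, nothing you need to code):
$$\frac{d}{dx}\sigma(x)=\frac{d}{dx}\frac{1}{1+e^{-x}}=\frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}=\frac{1}{1+e^{-x}}\cdot\frac{e^{-x}}{1+e^{-x}}=\sigma(x)\left(1-\sigma(x)\right)$$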
End of explanation
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)
    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
    """Implement a function that normalizes each row of the matrix x (to have unit length).
    Argument:
    x -- A numpy matrix of shape (n, m)
    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis=1, keepdims = True)
assert(x_norm.shape==(x.shape[0], 1))
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
# GRADED FUNCTION: softmax
def softmax(x):
    """Calculates the softmax for each row of the input x.
    Your code should work for a row vector and also for matrices of shape (n, m).
    Argument:
    x -- A numpy matrix of shape (n,m)
    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
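As a quick illustration (a small sketch with made-up numbers), dividing a (2, 3) array by a (2, 1) array broadcasts the second operand across the columns:
python
a = np.array([[1., 2., 4.],
              [3., 6., 9.]])                 # shape (2, 3)
row_sums = np.sum(a, axis=1, keepdims=True)  # shape (2, 1)
print(a / row_sums)                          # the (2, 1) array is broadcast over the 3 columns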
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
End of explanation
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
# GRADED FUNCTION: L1
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)
    Returns:
    loss -- the value of the L1 loss function defined above
    """
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(yhat-y))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
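As a small illustration of the difference (made-up 2x2 arrays):
python
A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [30, 40]])
print(np.dot(A, B))       # matrix product
print(np.multiply(A, B))  # element-wise product, same as A * B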
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
# GRADED FUNCTION: L2
def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)
    Returns:
    loss -- the value of the L2 loss function defined above
    """
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(y-yhat, y-yhat)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
End of explanation |
15,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semidefinite Relaxation of AC Optimal Power Flow Problems
This notebook demonstrates the use of semidefinite relaxation techniques for AC optimal power flow (ACOPF) problems. A description of the problem formulation and its implementation can be found in the following papers
Step1: Building and solving ACOPFs
The following example illustrates how to use opfsdr to download a test case, form a semidefinite relaxation problem, and how to solve this problem
Step2: The semidefinite relaxation can be solved with MOSEK as follows
Step3: The objective value associated with the semidefinite relaxation does not include constant cost, and hence it is not necessarily equal to the cost of generation (the cost of generation is available as sol['cost'])
Step4: Eigenvalue ratio(s)
Step5: Generation
Step6: Voltage magnitude constraints
Step7: Flow constraints | Python Code:
import json, re
import requests
testcases = {}
clist = []
# Retrieve list of MATPOWER test cases
response = requests.get('https://api.github.com/repos/MATPOWER/matpower/contents/data')
clist += json.loads(response.text)
# Retrieve list of pglib-opf test cases
response = requests.get('https://api.github.com/repos/power-grid-lib/pglib-opf/contents/')
clist += json.loads(response.text)
response = requests.get('https://api.github.com/repos/power-grid-lib/pglib-opf/contents/api')
clist += json.loads(response.text)
response = requests.get('https://api.github.com/repos/power-grid-lib/pglib-opf/contents/sad')
clist += json.loads(response.text)
# Build list of test cases
for c in clist:
if not c['name'].endswith('.m'): continue
casename = c['name'].split('.')[0]
testcases[casename] = c['download_url']
Explanation: Semidefinite Relaxation of AC Optimal Power Flow Problems
This notebook demonstrates the use of semidefinite relaxation techniques for AC optimal power flow (ACOPF) problems. A description of the problem formulation and its implementation can be found in the following papers:
M. S. Andersen, A. Hansson, L. Vandenberghe, "Reduced-Complexity Semidefinite Relaxations of Optimal Power Flow Problems", IEEE Transactions on Power Systems, 29 (4), pp. 1855–1863, 2014.
A. Eltved, J. Dahl, M. S. Andersen, "On the Robustness and Scalability of Semidefinite Relaxation for Optimal Power Flow Problems", arXiv (math.OC), 2018.
To run this notebook, you will need CVXOPT, CHOMPACK, MOSEK, and Matplotlib.
Test cases
MATPOWER provides a number of test cases that are basically MATLAB m-files, most of which (those that do not perform any computations) can be loaded directly from a file or from a url. For example, the following code
from opfsdr import opf
prob = opf('https://raw.githubusercontent.com/MATPOWER/matpower/master/data/case300.m')
downloads the case300 test case from the MATPOWER repository on Github and generates a semidefinite relaxation problem. Alternatively, using the Github API, a list of test cases and their urls can be retrieved from the MATPOWER repository as follows:
End of explanation
from opfsdr import opf
#case = 'pglib_opf_case1354_pegase__sad'
case = 'case1354pegase'
print("Test case: %s" % case)
options = {'conversion': True, # apply chordal conversion? (default: False)
'tfill': 16, # use clique merging heuristic based on fill (default: 0)
'tsize': 0, # use clique merging heuristic based on clique size (default: 0)
'branch_rmin': 1e-5, # minimum transmission line resistance (default: -inf)
'line_constraints': True, # include apparent power flow constraints in problem formulation
'pad_constraints': True, # include phase angle difference constraints
'truncate_gen_bounds': 1e3, # reduce large generator bounds (default: None)
'verbose': 1, # print info, progress, etc. (default: 0)
'scale': False # apply scaling heuristic to cone LP? (default: False)
}
%time prob = opf(testcases[case], **options)
print(prob)
Explanation: Building and solving ACOPFs
The following example illustrates how to use opfsdr to download a test case, form a semidefinite relaxation problem, and how to solve this problem:
End of explanation
# Set MOSEK tolerances
from opfsdr import msk
msk.options[msk.mosek.dparam.intpnt_co_tol_pfeas] = 1e-9 # default: 1e-8
msk.options[msk.mosek.dparam.intpnt_co_tol_dfeas] = 1e-9 # default: 1e-8
msk.options[msk.mosek.dparam.intpnt_co_tol_rel_gap] = 1e-8 # default: 1e-7
# Solve semidefinite relaxation
%time sol = prob.solve(solver="mosek")
print("Solver status: %s" % sol['status'])
Explanation: The semidefinite relaxation can be solved with MOSEK as follows:
End of explanation
print("Generation cost: %.2f USD/hour (fixed cost: %.2f USD/hour)"%(sol['cost'],prob.const_cost))
Explanation: The objective value associated with the semidefinite relaxation does not include constant cost, and hence it is not necessarily equal to the cost of generation (the cost of generation is available as sol['cost']):
End of explanation
%pylab inline
from pylab import plot,xlabel,ylabel,arange,linspace,hist,xscale,rcParams
rcParams['figure.figsize'] = (10, 6)
if len(sol['eigratio']) == 1:
print("Eigenvalue ratio: %.2e" % (sol['eigratio'][0]))
else:
import numpy as np
hist(sol['eigratio'],bins=[10.**a for a in linspace(0,9,100)])
xscale('log')
xlabel('Clique eigenvalue ratio')
ylabel('Frequency')
print("Cliques with eigenvalue ratio less than 1e5: %i of %i"\
% (len([1 for evr in sol['eigratio'] if evr < 1e5]),len(sol['eigratio'])))
Explanation: Eigenvalue ratio(s)
End of explanation
pmin = np.array([gen['Pmin'] for gen in prob.generators])*prob.baseMVA
pmax = np.array([gen['Pmax'] for gen in prob.generators])*prob.baseMVA
hist((np.array(sol['Sg'].real().T).squeeze()-pmin)/(pmax-pmin),20)
xlabel(r'$\frac{P_{\mathrm{g}}-P_{\mathrm{g}}^{\min}}{P_{\mathrm{g}}^{\max} - P_{\mathrm{g}}^{\min}}$')
ylabel('Number of generators');
pmin = np.array([gen['Qmin'] for gen in prob.generators])*prob.baseMVA
pmax = np.array([gen['Qmax'] for gen in prob.generators])*prob.baseMVA
hist((np.array(sol['Sg'].imag().T).squeeze()-pmin)/(pmax-pmin),20)
xlabel(r'$\frac{Q_{\mathrm{g}}-Q_{\mathrm{g}}^{\min}}{Q_{\mathrm{g}}^{\max} - Q_{\mathrm{g}}^{\min}}$')
ylabel('Number of generators');
Explanation: Generation
End of explanation
minvm = np.array([b['minVm'] for b in prob.busses])
maxvm = np.array([b['maxVm'] for b in prob.busses])
hist((np.array(sol['Vm']).squeeze()-minvm)/(maxvm-minvm),20)
xlabel(r'$\frac{|V| - V_{\min}}{V_{\max}-V_{\min}}$')
ylabel('Number of busses');
Explanation: Voltage magnitude constraints
End of explanation
if prob.branches_with_flow_constraints():
Sapp = np.array(abs(sol['St'])).squeeze()
Smax = np.array([br['rateA'] for _,br in prob.branches_with_flow_constraints()])
hist(Sapp/Smax,20)
pyplot.xlabel(r'$\frac{|S_{i,j}|}{S_{i,j}^{\max}}$')
pyplot.ylabel('Number of transmission lines')
else:
print("No flow constraints.")
Explanation: Flow constraints
End of explanation |
15,248 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Train-Test Splitting with Stratification using Scikit-Learn
| Python Code::
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.4,
random_state=101,
stratify=y)
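# Optional sanity check (a sketch; assumes y holds the class labels used above):
# with stratify=y, the class proportions should be roughly equal in both splits.
import numpy as np
print(np.unique(y_train, return_counts=True))
print(np.unique(y_test, return_counts=True))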
|
15,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HAC variance estimation
Use two data files FwdSpot1.dat and FwdSpot3.dat. The former contains monthly spot and 1-month forward exchange rates, the latter monthly spot and 3-month forward exchange rates, in \$/foreign currency, for the British Pound,
French Franc and Japanese Yen, for 1973
Step1: Read fixed width data automatically using the Pandas library
Step2: Transform data and create date index
Step3: Plot some data
Step4: Define weighting kernels
Step5: The following function makes a selection of the requested kernel according to the name variable.
Step6: HAC variance estimation
Step7: OLS estimation
Step8: Wald test of equality restrictions
Step9: Run regressions and collect results | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import datetime as dt
from numpy.linalg import inv, lstsq
from scipy.stats import chi2
# For inline pictures
%matplotlib inline
# For nicer output of Pandas dataframes
pd.set_option('float_format', '{:8.2f}'.format)
sns.set_context('notebook')
Explanation: HAC variance estimation
Use two data files FwdSpot1.dat and FwdSpot3.dat. The former contains monthly spot and 1-month forward exchange rates, the latter monthly spot and 3-month forward exchange rates, in \$/foreign currency, for the British Pound,
French Franc and Japanese Yen, for 1973:3 to 1992:8 (234 observations).
Each row contains the month, the year, the spot rates for Pound, Franc, and Yen, and then the forward rates for the same three currencies. Download the data, then take logarithms of the rates.
We are interested in testing the conditional unbiasedness hypothesis that
$$
\mathbb{E}_{t}\left[s_{t+k}\right]=f_{t,k},
$$
where $s_{t}$ is the spot rate at $t$, $f_{t,k}$ is the forward rate for $k$-month forwards at $t$,
and $\mathbb{E}_{t}$ denotes mathematical expectation conditional on time $t$ information.
The above statement says that the forward rate is a conditionally unbiased predictor of the future spot exchange rate.
To test this theory, it is conventional to nest the above expectation hypothesis within the following econometric model:
$$
s_{t+k}-s_{t}=\alpha+\beta\left(f_{t,k}-s_{t}\right)+e_{t+k},\quad\mathbb{E}_{t}\left[e_{t+k}\right]=0,
$$
and test $H_{0}:\alpha=0,\beta=1$. The current spot rate is subtracted to achieve stationarity.
The difference $s_{t+k}-s_{t}$ is called the exchange rate depreciation, the difference $f_{t,k}-s_{t}$ the forward premium.
Do the following exercises for the three currencies comparing the results across the currencies throughout.
For both types of forwards, estimate the model by OLS. Report parameter estimates with appropriate standard errors.
The model at hand is simply a linear model with possibly autocorrelated and heteroscedastic errors. The estimator in general is
$$
\hat{\beta}_{T}=\left(X^{\prime}X\right)^{-1}X^{\prime}Y.
$$
Its asymptotic distribution is
$$
\sqrt{T}\left(\hat{\beta}_{T}-\beta_{0}\right)\overset{d}{\longrightarrow}
N\left(0,\mathbb{E}\left[X_{t}X_{t}^{\prime}\right]^{-1}
\mathbb{E}\left[e_{t}^{2}X_{t}X_{t}^{\prime}\right]
\mathbb{E}\left[X_{t}X_{t}^{\prime}\right]^{-1}\right)
$$
if the errors are uncorrelated. The asymptotic covariance matrix can be estimated by (White's estimator)
$$
\hat{V}_{\hat{\beta}}
=\left(X^{\prime}X\right)^{-1}\left(X^{\prime}\text{diag}
\left(\hat{e}^{2}\right)X\right)\left(X^{\prime}X\right)^{-1}.
$$
Test conditional unbiasedness using asymptotic theory.
One could use the Wald statistic
$$
W_{T}=\left(\hat{\theta}-\theta\right)^{\prime}
\hat{V}_{\hat{\theta}}^{-1}\left(\hat{\theta}-\theta\right),
$$
where $\hat{\theta}=r\left(\hat{\beta}\right)$ is the unrestricted estimate and $\theta=r\left(\beta\right)$ is the restriction being tested. The asymptotic distribution of this test statistic is $\chi_{q}^{2}$, where $q=\text{rank}\left(\partial r/\partial\beta^{\prime}\right)$. We also know that
$$
\hat{V}_{\hat{\theta}}=\hat{R}^{\prime}\hat{V}_{\hat{\beta}}\hat{R},
$$
where
$$
\hat{R}=\frac{\partial}{\partial\beta}r\left(\hat{\beta}\right)^{\prime}.
$$
In our case $r\left(\beta\right)=\beta$, $\theta=\left[0,1\right]^{\prime}$, and $R$ is the identity matrix.
With 3-month forwards, try estimation of the long run variance with the Hansen-Hodrick, Newey-West, and Andrews (with the Parzen kernel) estimators.
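These estimators differ only in the kernel weight function $w\left(\cdot\right)$ used in the long-run variance estimate. For reference (this is the form implemented by the HAC function below),
$$
\hat{S}=\hat{\Gamma}_{0}+\sum_{m=1}^{T-1}w\left(\frac{m}{q}\right)\left(\hat{\Gamma}_{m}+\hat{\Gamma}_{m}^{\prime}\right),\qquad
\hat{\Gamma}_{m}=\sum_{t=m+1}^{T}\hat{e}_{t}\hat{e}_{t-m}X_{t}X_{t-m}^{\prime},
$$
where $q$ is the bandwidth, and the covariance estimate becomes $\hat{V}_{\hat{\beta}}=\left(X^{\prime}X\right)^{-1}\hat{S}\left(X^{\prime}X\right)^{-1}$.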
End of explanation
names = ['Month', 'Year', 'SpotPound', 'SpotFranc', 'SpotYen', 'FwdPound', 'FwdFranc', 'FwdYen']
df1 = pd.read_fwf('../data/ExchangeRates/FwdSpot1.dat', names=names)
df3 = pd.read_fwf('../data/ExchangeRates/FwdSpot3.dat', names=names)
print(df1.head())
print(df3.head())
Explanation: Read fixed width data automatically using the Pandas library
End of explanation
def date(x):
return dt.date(1900 + int(x['Year']), int(x['Month']), 1)
def transform_data(df):
df['Date'] = df.apply(date, axis=1)
df.set_index('Date', inplace=True)
df.drop(['Year','Month'], axis=1, inplace=True)
df = np.log(df)
return df
df1 = transform_data(df1)
df3 = transform_data(df3)
print(df1.head())
Explanation: Transform data and create date index
End of explanation
df1[['SpotPound', 'SpotFranc', 'SpotYen']].plot(figsize=(10, 8), subplots=True)
plt.show()
Explanation: Plot some data
End of explanation
def White(x):
return 0
def HansenHodrick(x):
if abs(x) <= 1:
return 1
else:
return 0
def Bartlett(x):
if abs(x) <= 1:
return 1 - abs(x)
else:
return 0
def Parzen(x):
    # Parzen kernel: 1 - 6x^2 + 6|x|^3 for |x| <= 1/2, 2(1 - |x|)^3 for 1/2 < |x| <= 1, 0 otherwise
    if abs(x) <= .5:
        return 1 - 6 * x**2 + 6 * abs(x)**3
    elif .5 < abs(x) <= 1:
        return 2 * (1 - abs(x))**3
    else:
        return 0
Explanation: Define weighting kernels
End of explanation
def kernel(x, name):
kernels = {'White' : White,
'HansenHodrick' : HansenHodrick,
'Bartlett' : Bartlett,
'Parzen' : Parzen}
return kernels[name](x)
Explanation: The following function makes a selection of the requested kernel according to the name variable.
End of explanation
def HAC(e, X, kern):
    # Number of observations
    N = X.shape[0]
    # Bandwidth (truncation lag) rule of thumb
    q = round(N**(1/5))
    for m in range(0, N):
        # m-th sample autocovariance of the moment vector X_t * e_t
        G = np.dot(X[m:].T * e[m:], (X[:N-m].T * e[:N-m]).T)
        if m == 0:
            S = G
        else:
            # Kernel weight; zero beyond the kernel support
            w = kernel(m / q, kern)
            S += w * (G + G.T)
    # Sandwich form of the asymptotic covariance matrix
    Q = inv(np.dot(X.T, X))
    V = np.dot(Q, S).dot(Q)
    return V
Explanation: HAC variance estimation
End of explanation
def ols(Y, X, kern):
Y = np.array(Y)
X = np.vstack((np.ones_like(X), X)).T
Qxx = np.dot(X.T, X)
Qxy = np.dot(X.T, Y)
# Parameter estimate
beta = np.dot(inv(Qxx), Qxy)
# Residual estimates
e = Y - np.dot(X, beta)
# Estimate of asymptotic variance
V = HAC(e, X, kern)
# Corresponding standard errors
s = np.diag(V) ** .5
# t-statistics
t = beta / s
return beta, V, s, t
Explanation: OLS estimation
End of explanation
def Wald(beta_hat, beta, V):
# Test statistic
W = np.dot(beta_hat - beta, inv(V)).dot((beta_hat - beta).T)
# p-value of the test
p = 1 - chi2.cdf(W, 2)
return W, p
Explanation: Wald test of equality restrictions
End of explanation
# Create lists for all available options
currency = ['Pound', 'Franc', 'Yen']
kernels = ['White', 'HansenHodrick', 'Bartlett', 'Parzen']
# Run over two data sets and correspondig lags
for df, lag in zip([df1, df3], [1, 3]):
# Create dictionary container for the results
tb = dict()
for c in currency:
for kern in kernels:
# Create Y and X according to the model
Y = (df['Spot' + c].diff(lag)).iloc[lag:]
X = (df['Fwd' + c].shift(lag) - df['Spot' + c].shift(lag)).iloc[lag:]
# OLS estimation results
beta, V, s, t = ols(Y, X, kern)
# Restriction on parameters
beta_restr = np.array([0,1])
# Wald test statistic
W, p = Wald(beta, beta_restr, V)
# Dictionary of results
tb[c, kern] = {'alpha*1e2' : beta[0]*1e2, 'beta' : beta[1],
't(alpha)' : t[0], 't(beta)' : t[1],
'W' : W, 'p' : p}
# Convert dictionary to DataFrame
tb = pd.DataFrame(tb).T
print(tb, '\n')
Explanation: Run regressions and collect results
End of explanation |
15,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Fluorinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'besm-2-7', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: INPE
Source ID: BESM-2-7
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
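A completed call simply follows the pattern shown in the cell's comment; the name and address below are hypothetical placeholders, not real document authors.
# Hypothetical illustration only -- substitute the actual document authors
DOC.set_author("Jane Doe", "jane.doe@example.org")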
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
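As an illustration of how these enumerated, multi-valued (cardinality 1.N) cells are completed, the cell's "PROPERTY VALUE(S)" comment suggests one DOC.set_value call per selected choice; that per-choice pattern is an assumption, and the choices picked below are hypothetical rather than a description of BESM-2-7.
# Hypothetical illustration only -- the real model's approximations may differ;
# one set_value call per selected choice is assumed from the cell's comment.
DOC.set_value("primitive equations")
DOC.set_value("hydrostatic")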
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
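#   Illustration only (not a recorded value): a 94 GHz, CloudSat-like radar
#   would be entered in Hz, e.g.:
#   DOC.set_value(94.0e9)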
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
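#   Illustration only (not a recorded value): a typical present-day total
#   solar irradiance is about 1361 W m-2, e.g.:
#   DOC.set_value(1361.0)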
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
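#   Illustration only (not a recorded value), e.g. a pre-industrial reference:
#   DOC.set_value(1850)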
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
15,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Two-Level
Step2: We'll just check that the pulse area is what we want.
Step3: Solve the Problem
Step4: Plot Output
Step5: Analysis
The $2 \pi$ sech pulse passes through, slowed but with shape unaltered. This is self-induced transparency.
Movie | Python Code:
import numpy as np
SECH_FWHM_CONV = 1./2.6339157938
t_width = 1.0*SECH_FWHM_CONV # [τ]
print('t_width', t_width)
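# Expected output: t_width ≈ 0.3797 (FWHM in units of τ)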
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"rabi_freq_t_args": {
"n_pi": 2.0,
"centre": 0.0,
"width": %f
},
"rabi_freq_t_func": "sech"
}
],
"num_states": 2
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 120,
"z_min": -0.5,
"z_max": 1.5,
"z_steps": 100,
"interaction_strengths": [
10.0
],
"savefile": "mbs-two-sech-2pi"
}
"""%(t_width)
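# Optional sanity check (not in the original notebook): confirm the formatted
# string parses as JSON before handing it to MBSolve.
import json
json.loads(mb_solve_json)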
from maxwellbloch import mb_solve
mb_solve_00 = mb_solve.MBSolve().from_json_str(mb_solve_json)
Explanation: Two-Level: Sech Pulse 2π — Self-Induced Transparency
Define the Problem
First we need to define a sech pulse with the area we want. We'll fix the width of the pulse and the area to find the right amplitude.
The full-width at half maximum (FWHM) $t_s$ of the sech pulse is related to the FWHM of a Gaussian by a factor of $1/2.6339157938$. (See §3.2.2 of my PhD thesis).
End of explanation
print('The input pulse area is {0}'.format(np.trapz(mb_solve_00.Omegas_zt[0,0,:].real,
mb_solve_00.tlist)/np.pi))
Explanation: We'll just check that the pulse area is what we want.
End of explanation
%time Omegas_zt, states_zt = mb_solve_00.mbsolve(recalc=True)
Explanation: Solve the Problem
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
sns.set_style("darkgrid")
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 1.0, 11)
cf = ax.contourf(mb_solve_00.tlist, mb_solve_00.zlist,
np.abs(mb_solve_00.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
for y in [0.0, 1.0]:
    ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
fig, ax = plt.subplots(figsize=(16, 4))
ax.plot(mb_solve_00.zlist, mb_solve_00.fields_area()[0]/np.pi, clip_on=False)
ax.set_ylim([0.0, 8.0])
ax.set_xlabel('Distance ($L$)')
ax.set_ylabel('Pulse Area ($\pi$)');
Explanation: Plot Output
End of explanation
# C = 0.1 # speed of light
# Y_MIN = 0.0 # Y-axis min
# Y_MAX = 4.0 # y-axis max
# ZOOM = 2 # level of linear interpolation
# FPS = 30 # frames per second
# ATOMS_ALPHA = 0.2 # Atom indicator transparency
# FNAME = "images/mb-solve-two-sech-2pi"
# FNAME_JSON = FNAME + '.json'
# with open(FNAME_JSON, "w") as f:
# f.write(mb_solve_json)
# !make-mp4-fixed-frame.py -f $FNAME_JSON -c $C --fps $FPS --y-min $Y_MIN --y-max $Y_MAX \
# --zoom $ZOOM --atoms-alpha $ATOMS_ALPHA #--peak-line --c-line
# FNAME_MP4 = FNAME + '.mp4'
# !make-gif-ffmpeg.sh -f $FNAME_MP4 --in-fps $FPS
# from IPython.display import Image
# Image(url=FNAME_MP4 +'.gif', format='gif')
Explanation: Analysis
The $2 \pi$ sech pulse passes through, slowed but with shape unaltered. This is self-induced transparency.
Movie
End of explanation |
15,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DEM analysis
Load site using shapely
Step1: Load digital elevation model
Step2: Reproject the site to coords of dem for sampling elevation
Step3: Get site elevation
Step4: Not integrated in this notebook yet...
Did the viewshed analysis in QGIS using https
Step5: Solar analysis
Get the azimuth and altitude for the sun for every hour during the year
Step6: Interpolate the DEM derived skyview
Step7: Can the site see the sun???
Step8: 'Effective' hours of direct sun
Filter out hours when sun < 15 degrees above horizon | Python Code:
# Imports for the cells below; the notebook's original import cell is not shown
# in this excerpt, so these are the obvious modules for the calls that follow.
import json
import math
import datetime
import ephem
import pytz
import pandas as pd
import matplotlib.pyplot as plt
from osgeo import gdal, ogr, osr
from shapely.geometry import shape

with open('inputs/site.geojson') as f:
js = json.load(f)
s = shape(js['features'][0]['geometry'])
s
Explanation: DEM analysis
Load site using shapely
End of explanation
dem = gdal.Open('inputs/dem/filled.tif')
Explanation: Load digital elevation model
End of explanation
site = ogr.Geometry(ogr.wkbPoint) # create an ogr geom instead of shapely
site.AddPoint(s.x, s.y)
sr = dem.GetProjection()
destSR = osr.SpatialReference()
inSRS_converter = osr.SpatialReference()
inSRS_converter.ImportFromWkt(sr)
inSRS_proj4 = inSRS_converter.ExportToProj4()
destSR.ImportFromProj4(inSRS_proj4)
srcSR = osr.SpatialReference()
srcSR.ImportFromProj4("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")
srTrans = osr.CoordinateTransformation(srcSR,destSR)
site_reproj = site
site.Transform(srTrans)
print(site_reproj.ExportToWkt())
Explanation: Reproject the site to coords of dem for sampling elevation
End of explanation
gt=dem.GetGeoTransform()
rb=dem.GetRasterBand(1)
def get_elev_at_point(geotransform, rasterband, pointgeom):
    mx, my = pointgeom.GetX(), pointgeom.GetY()
    # Convert from map to pixel coordinates.
    # Only works for geotransforms with no rotation.
    px = int((mx - geotransform[0]) / geotransform[1])  # x pixel
    py = int((my - geotransform[3]) / geotransform[5])  # y pixel
    intval = rasterband.ReadAsArray(px, py, 1, 1)[0]
    return intval[0]
site_elev = get_elev_at_point(gt, rb, site_reproj)
site_elev
Explanation: Get site elevation
End of explanation
df = pd.read_csv('inputs/viewshed/horizon_dist.csv')
df_skyview = df.set_index('bearing_from_site')
df_skyview.plot(y='inclination_from_site', style='.', figsize=(18,6), xlim=(0, 360))
pd.set_option('display.float_format', lambda x: '%.3f' % x)
df_skyview.loc[df_skyview['inclination_from_site'].idxmax()]
Explanation: Not integrated in this notebook yet...
Did the viewshed analysis in QGIS using https://github.com/zoran-cuckovic/QGIS-visibility-analysis
SAGA or GRASS could work here
Options:
- horizon search radius 20 km
- target (site) height 2 meters above surface
- adapting out 3 pixels (15 meters/~50 feet) from site to find high spot
Kept working in qgis for speed, will convert to this notebook eventually
In QGIS the horizon was:
- converted to ascii
- imported as points
Then:
- the dem was sampled with the horizon pts
- distance to site from each point calculated (m)
- inclination angle to each point calculated: degrees(atan((horizon_elev - site_elev)/distance))
Bearing calculation (see the Python sketch below):
degrees(atan(($x_at(0)-site_lon)/($y_at(0)-site_lat))) + (180 *((($y_at(0)-site_lat) < 0) + ((($x_at(0)-site_lon) < 0 AND ($y_at(0)-site_lat) >0)*2)))
End of explanation
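# A minimal Python sketch (not part of the original workflow) of the two QGIS
# field-calculator expressions above; function and argument names here are
# illustrative, and atan2 replaces the manual quadrant corrections in the
# bearing expression while keeping the same planar approximation.
import math

def inclination_from_site(horizon_elev, site_elev, distance_m):
    # Inclination angle (degrees) from the site to a horizon point.
    return math.degrees(math.atan((horizon_elev - site_elev) / distance_m))

def bearing_from_site(x, y, site_lon, site_lat):
    # Bearing (degrees clockwise from north, 0-360) from the site to the point.
    return (math.degrees(math.atan2(x - site_lon, y - site_lat)) + 360.0) % 360.0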
panel = ephem.Observer()
panel.lon = str(s.x) # pyephem reads string lon/lat as degrees (a float would be radians)
panel.lat = str(s.y)
panel.elevation = site_elev+2 # +2 m above ground, consistent with the viewshed analysis
# panel.date = '2017/05/17 20:30:00' # utc time
panel
from pytz import all_timezones, common_timezones
local = pytz.timezone('America/Anchorage')
def get_sunangles(time_utc, observer):
    observer.date = time_utc
    solar = ephem.Sun(observer)
    a = {}
    a['az'] = math.degrees(solar.az)
    a['alt'] = math.degrees(solar.alt)
    return a
get_sunangles(datetime.datetime.utcnow(), panel)
df_sun = pd.DataFrame(index=pd.date_range('2018-01-01 00:00', '2018-12-31 23:59', freq='5min', tz=local))
samples_per_hour=12 # adjust based on freq
df_sun['ts_utc'] = (df_sun.index).tz_convert(pytz.timezone('UTC'))
df_sun['az'] = df_sun.apply(lambda row: get_sunangles(row['ts_utc'], panel)['az'], axis=1)
df_sun['alt'] = df_sun.apply(lambda row: get_sunangles(row['ts_utc'], panel)['alt'], axis=1)
# df_sun.alt.plot(figsize=(18,6))
Explanation: Solar analysis
Get the azimuth and altitude for the sun for every hour during the year
End of explanation
skyview_bearings = df_skyview.index.tolist()
azimuths = df_sun.az.tolist()
all_bearings = list(skyview_bearings)
all_bearings.extend(x for x in azimuths if x not in all_bearings)
df_allbearings = df_skyview.reindex(all_bearings)
df_allbearings.sort_index(inplace=True)
df_allbearings = df_allbearings.inclination_from_site.interpolate().ffill().bfill()
# df_allbearings.plot(kind='area', figsize=(18,6))
Explanation: Interpolate the DEM derived skyview
End of explanation
df_viz = df_sun.join(df_allbearings, on='az')
df_viz['visible'] = False
df_viz.loc[df_viz['alt'] > df_viz['inclination_from_site'], 'visible'] = True
series_dailysum = df_viz.groupby(pd.TimeGrouper(freq='D'))['visible'].sum()/samples_per_hour
series_dailysum.plot(figsize=(18,6), style='.')
series_monthlysum = df_viz.groupby(pd.TimeGrouper(freq='M'))['visible'].sum()/samples_per_hour
series_monthlysum.plot(kind='bar', figsize=(18,6))
series_dailysum.to_csv(path='dailysum.csv', index=True, date_format='%Y-%m-%d', float_format='%.2f')
series_monthlysum.to_csv(path='monthlysum.csv', index=True, date_format='%B', float_format='%.2f')
series_monthlysum
Explanation: Can the site see the sun???
End of explanation
min_alt = 15.0
df_viz['effective'] = False
df_viz.loc[(df_viz['alt'] > min_alt), 'effective'] = True
df_viz['above_min_alt'] = False
df_viz.loc[(df_viz['visible'] == True) & (df_viz['effective'] == True), 'above_min_alt'] = True
series_abovemin_dailysum = df_viz.groupby(pd.TimeGrouper(freq='D'))['effective'].sum()/samples_per_hour
series_effec_dailysum = df_viz.groupby(pd.TimeGrouper(freq='D'))['above_min_alt'].sum()/samples_per_hour
series_effec_monthlysum = df_viz.groupby(pd.TimeGrouper(freq='M'))['above_min_alt'].sum()/samples_per_hour
series_effec_monthlysum
df_dailysums = pd.concat([series_dailysum, series_effec_dailysum, series_abovemin_dailysum], axis=1)
df_dailysums.columns = ['Sun visible above terrain', 'Sun visible and effective', 'Sun above 15 degrees']
fig, ax = plt.subplots(1, 1, figsize=(18,6))
df_dailysums.plot(ax=ax)
ax.set(xlabel='Date',
ylabel='Hours per day')
ax.legend(loc=2)
fig.autofmt_xdate()
fig, ax = plt.subplots(1, 1, figsize=(18,6))
df_dailysums['Sun above 15 degrees'].sub(df_dailysums['Sun visible and effective'], axis=0).plot(ax=ax)
ax.set(title='Hours of effective sun blocked by terrain')
Explanation: 'Effective' hours of direct sun
Filter out hours when sun < 15 degrees above horizon
End of explanation |
15,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MPI-M
Source ID: ICON-ESM-LR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
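# e.g. (placeholder values only -- replace with the real document author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")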
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
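#   Illustration only -- an example choice from the list above, not a
#   statement about this model:
#   DOC.set_value("OASIS3-MCT")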
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
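#
# Purely illustrative (hypothetical) entries -- for 0.N cardinality the call is
# presumably repeated once per metric; the metric names below are invented:
#     DOC.set_value("Global mean top-of-atmosphere radiation balance")
#     DOC.set_value("Global mean near-surface air temperature")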
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
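#
# Purely illustrative (hypothetical) free-text entry for a STRING property:
#     DOC.set_value("Heat is conserved globally to within a small numerical drift.")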
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
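#
# Purely illustrative (hypothetical) entry -- the single-letter codes above are
# the controlled vocabulary for how the forcing is provided (e.g. "C" for
# concentration-driven); this does not describe any actual model:
#     DOC.set_value("C")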
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
15,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <a id='top'></a>
Demonstration of the filters available in scipy.signal
This notebook is not intended to replace the SciPy reference guide but to serve only as a one stop shop for the filter functions available in the signal processing module (see http
Step2: We are almost ready to start exploring the filters available at scipy.signal (http
Step3: Back to top
Butterworth digital and analog filter
Butterworth filters, signal.butter function signature
Step4: Back to top
Chebyshev type I digital and analog filter
Chebyshev type I filters, signal.cheby1 function signature
Step5: Back to top
Chebyshev type II digital and analog filter
Chebyshev type II filters, signal.cheby2 function signature
Step6: Back to top
Elliptic (Cauer) digital and analog filter
Elliptic (Cauer) filters, signal.ellip function signature
Step7: Back to top
Bessel/Thomson digital and analog filter
Bessel/Thomson filters, signal.bessel function signature
Step8: Back to top
Notch (band-stop) and Peak (band-pass) digital filter
Notch filters, signal.iirnotch function signature
Peak filters, signal.iirpeak function signature | Python Code:
# IPython magic commands
%matplotlib inline
# Python standard library
import sys
import os.path
# 3rd party modules
import numpy as np
import scipy as sp
import matplotlib as mpl
from scipy import signal
import matplotlib.pyplot as plt
print(sys.version)
for module in (np, sp, mpl):
print('{:.<15}{}'.format(module.__name__, module.__version__))
def format_plots(ax, filter_name, column_name=('digital', 'analog')):
    """Format Bode plots."""
for column in range(2):
ax[0][column].set_title(f'{filter_name} {column_name[column]} filter')
ax[0][column].set_xlabel('Frequency [Hz]')
ax[0][column].set_ylabel('Amplitude [dB]')
ax[0][column].margins(0, 0.1)
ax[0][column].grid(b=True, which='both', axis='both')
ax[0][column].legend()
ax[1][column].set_xlabel('Frequency [Hz]')
ax[1][column].set_ylabel('Phase [degrees]')
ax[1][column].margins(0, 0.1)
ax[1][column].grid(b=True, which='both', axis='both')
ax[1][column].legend()
Explanation: <a id='top'></a>
Demonstration of the filters available in scipy.signal
This notebook is not intended to replace the SciPy reference guide but to serve only as a one stop shop for the filter functions available in the signal processing module (see http://docs.scipy.org/doc/scipy/reference/signal.html for detailed information). Alternative sources of information can be found in this Wikipedia article and elsewhere (e.g. LabVIEW, MATLAB).
Table of contents
Preamble
Butterworth digital and analog filter
Chebyshev type I digital and analog filter
Chebyshev type II digital and analog filter
Elliptic (Cauer) digital and analog filter
Bessel/Thomson digital and analog filter
Notch (band-stop) and Peak (band-pass) digital filter
Odds and ends
Preamble
Before we can start demonstrating the filters, we have to setup the computational environment for this Python notebook:
End of explanation
fs = 200. # sampling frequency (Hz)
fc = 50. # critical frequency (Hz)
wc = fc/(fs/2.) # normalized frequency (half-cycles/sample)
Explanation: We are almost ready to start exploring the filters available at scipy.signal (http://docs.scipy.org/doc/scipy/reference/signal.html). In all of them we will consider the following base values:
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(18,12))
for order in range(3):
# digital filter
b, a = signal.butter(2*(order+1), wc, analog=False)
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][0].axvline(fc, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][0].axvline(fc, color='green')
# analog filter
b, a = signal.butter(2*(order+1), fc, analog=True)
w, h = signal.freqs(b, a)
ax[0][1].semilogx(w, 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][1].axvline(fc, color='green')
ax[1][1].semilogx(w, np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, 'Butterworth')
Explanation: Back to top
Butterworth digital and analog filter
Butterworth filters, signal.butter function signature
End of explanation
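As a brief aside that is not part of the original notebook, a designed digital filter can also be applied to data; the snippet below is only a hedged illustration that reuses the objects already defined above (fs, wc, np, signal) together with an invented noisy test signal.
# Illustrative only: apply a 4th-order digital Butterworth low-pass with zero-phase filtering
t = np.arange(0, 1, 1 / fs)                                     # one second of samples at fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone plus noise
b_lp, a_lp = signal.butter(4, wc)                               # digital low-pass at the critical frequency
x_filtered = signal.filtfilt(b_lp, a_lp, x)                     # forward-backward filtering avoids phase distortion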
fig, ax = plt.subplots(2, 2, figsize=(18,12))
rp = 5 # maximum ripple allowed below unity gain in the passband, specified in decibels as a positive number
for order in range(3):
# digital filter
b, a = signal.cheby1(2*(order+1), rp, wc, analog=False)
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][0].axvline(fc, color='green')
ax[0][0].axhline(-rp, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][0].axvline(fc, color='green')
# analog filter
b, a = signal.cheby1(2*(order+1), rp, fc, analog=True)
w, h = signal.freqs(b, a)
ax[0][1].semilogx(w, 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][1].axvline(fc, color='green')
ax[0][1].axhline(-rp, color='green')
ax[1][1].semilogx(w, np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, f'Chebyshev Type I (rp={rp})')
Explanation: Back to top
Chebyshev type I digital and analog filter
Chebyshev type I filters, signal.cheby1 function signature
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(18,12))
rs = 40 # minimum attenuation required in the stop band, specified in decibels as a positive number
for order in range(3):
# digital filter
b, a = signal.cheby2(2*(order+1), rs, wc, analog=False)
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][0].axvline(fc, color='green')
ax[0][0].axhline(-rs, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][0].axvline(fc, color='green')
# analog filter
b, a = signal.cheby2(2*(order+1), rs, fc, analog=True)
w, h = signal.freqs(b, a)
ax[0][1].semilogx(w, 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][1].axvline(fc, color='green')
ax[0][1].axhline(-rs, color='green')
ax[1][1].semilogx(w, np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, f'Chebyshev Type II (rs={rs})')
Explanation: Back to top
Chebyshev type II digital and analog filter
Chebyshev type II filters, signal.cheby2 function signature
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(18,12))
rp = 5 # maximum ripple allowed below unity gain in the passband, specified in decibels as a positive number
rs = 40 # minimum attenuation required in the stop band, specified in decibels as a positive number
for order in range(3):
# digital filter
b, a = signal.ellip(2*(order+1), rp, rs, wc, analog=False)
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][0].axvline(fc, color='green')
ax[0][0].axhline(-rp, color='green')
ax[0][0].axhline(-rs, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][0].axvline(fc, color='green')
# analog filter
b, a = signal.ellip(2*(order+1), rp, rs, fc, analog=True)
w, h = signal.freqs(b, a)
ax[0][1].semilogx(w, 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][1].axvline(fc, color='green')
ax[0][1].axhline(-rp, color='green')
ax[0][1].axhline(-rs, color='green')
ax[1][1].semilogx(w, np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, f'Elliptic (rp={rp}, rs={rs})')
Explanation: Back to top
Elliptic (Cauer) digital and analog filter
Elliptic (Cauer) filters, signal.ellip function signature
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(18,12))
for order in range(3):
# digital filter
b, a = signal.bessel(2*(order+1), wc, analog=False)
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][0].axvline(fc, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][0].axvline(fc, color='green')
# analog filter
b, a = signal.bessel(2*(order+1), fc, analog=True)
w, h = signal.freqs(b, a)
ax[0][1].semilogx(w, 20 * np.log10(abs(h)), label=f'order={2*(order+1)}')
ax[0][1].axvline(fc, color='green')
ax[1][1].semilogx(w, np.unwrap(np.angle(h))*180/np.pi, label=f'order={2*(order+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, 'Bessel')
Explanation: Back to top
Bessel/Thomson digital and analog filter
Bessel/Thomson filters, signal.bessel function signature
End of explanation
fig, ax = plt.subplots(2, 2, figsize=(18,12))
for Q in range(3): # quality factor
# Notch filter
b, a = signal.iirnotch(wc, 10*(Q+1))
w, h = signal.freqz(b, a)
ax[0][0].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'Q={10*(Q+1)}')
ax[0][0].axvline(fc, color='green')
ax[1][0].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'Q={10*(Q+1)}')
ax[1][0].axvline(fc, color='green')
# Peak filter
b, a = signal.iirpeak(wc, 10*(Q+1))
w, h = signal.freqz(b, a)
ax[0][1].plot(w*fs/(2*np.pi), 20 * np.log10(abs(h)), label=f'Q={10*(Q+1)}')
ax[0][1].axvline(fc, color='green')
ax[1][1].plot(w*fs/(2*np.pi), np.unwrap(np.angle(h))*180/np.pi, label=f'Q={10*(Q+1)}')
ax[1][1].axvline(fc, color='green')
format_plots(ax, 'Digital', column_name=('notch', 'peak'))
Explanation: Back to top
Notch (band-stop) and Peak (band-pass) digital filter
Notch filters, signal.iirnotch function signature
Peak filters, signal.iirpeak function signature
End of explanation |
15,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Hernandez2014
Title
Step1: Table 4 - Low Resolution Analysis | Python Code:
%pylab inline
import seaborn as sns
sns.set_context("notebook", font_scale=1.5)
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
Explanation: ApJdataFrames Hernandez2014
Title: A SPECTROSCOPIC CENSUS IN YOUNG STELLAR REGIONS: THE σ ORIONIS CLUSTER
Authors: Jesus Hernandez, Nuria Calvet, Alice Perez, Cesar Briceno, Lorenzo Olguin, Maria E Contreras, Lee Hartmann, Lori E Allen, Catherine Espaillat, and Ramírez Hernan
Data is from this paper:
http://iopscience.iop.org/0004-637X/794/1/36/article
End of explanation
tbl4 = ascii.read("http://iopscience.iop.org/0004-637X/794/1/36/suppdata/apj500669t4_mrt.txt")
tbl4[0:4]
Na_mask = ((tbl4["f_EWNaI"] == "Y") | (tbl4["f_EWNaI"] == "N"))
print "There are {} sources with Na I line detections out of {} sources in the catalog".format(Na_mask.sum(), len(tbl4))
tbl4_late = tbl4[['Name', '2MASS', 'SpType', 'e_SpType','EWHa', 'f_EWHa', 'EWNaI', 'e_EWNaI', 'f_EWNaI']][Na_mask]
tbl4_late.pprint(max_lines=100, )
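As an optional follow-up that is not in the original notebook, the filtered subset could be written to disk with astropy's unified I/O; the output filename below is an invented placeholder.
# Hypothetical export of the Na I subset (assumes write access to the working directory)
tbl4_late.write("hernandez2014_tbl4_late.csv", format="ascii.csv", overwrite=True)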
Explanation: Table 4 - Low Resolution Analysis
End of explanation |
15,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input/Output in the ipython notebook
User input is performed using the input construct
Step1: Displaying a single variable
To display a single variable per cell, it is enough to write the name of that variable
Step2: If several variables need to be displayed, it is better to use the print construct
Step3: The list data type
Lists in Python are ordered, mutable collections of objects of arbitrary types (much like an array, except that a list can hold different data types).
Initialization via list() or []
Step4: List comprehensions
List comprehensions are a way to build a new list by applying an expression to every element of a sequence
Step5: List methods
list.append(x) - Append an element to the end of the list
Step6: list.extend(L) - Extend the list by appending all the elements of list L
Step7: list.insert(i, x) - Insert value x at position i
Step8: list.remove(x) - remove the first element in the list whose value is x. Raises ValueError if no such element exists.
Step9: list.pop([i]) - Remove the i-th element and return it. If no index is given, the last element is removed
Step10: list.index(x, [start [, end]]) - returns the position of the first element whose value is x
(the search is performed from start to end)
Returns
Step11: list.count(x) - returns the number of elements whose value is x
Step12: list.sort([key=function], [reverse=True])
Step13: list.reverse() - reverse the list in place.
Step14: list.copy() - a shallow copy of the list
Step15: list.clear() - clear the list
Step16: Functions
A function in Python is an object that takes arguments and returns a value. A function is usually defined with the def statement.
Step17: A function can also take a variable number of positional arguments, in which case the parameter name is preceded by *
Step18: Anonymous functions
Step19: The same thing as | Python Code:
a = input("Enter new name:")
Explanation: Input/Output in the ipython notebook
User input is performed using the input construct
End of explanation
a
Explanation: Displaying a single variable
To display a single variable per cell, it is enough to write the name of that variable
End of explanation
b = input("Enter b:")
print(a,b)
Explanation: If several variables need to be displayed, it is better to use the print construct
End of explanation
a = ["список"]
b = list("список")
print("a=", a)
print("b={}".format(b))
Explanation: The list data type
Lists in Python are ordered, mutable collections of objects of arbitrary types (much like an array, except that a list can hold different data types).
Initialization via list() or []
End of explanation
a = [1,2,3]
b = [x**2 for x in a]
print(a)
print(b)
Explanation: List comprehensions
List comprehensions are a way to build a new list by applying an expression to every element of a sequence
End of explanation
a = [1, 2, 3]
print(a)
a.append(4)
print(a)
Explanation: List methods
list.append(x) - Append an element to the end of the list
End of explanation
a = [1, 2, 3]
b = [4, 6]
print(a)
a.extend(b)
print(a)
Explanation: list.extend(L) - Extend the list by appending all the elements of list L
End of explanation
a = [1, 2, 3]
print(a)
a.insert(1, "a")
print(a)
Explanation: list.insert(i, x) - Insert value x at position i
End of explanation
a =[2, 1, 3, 1, 2, 1]
print(a)
a.remove(1)
print(a)
a = [3, 4, 5]
print(a)
a.remove(2)
a = [3, 4, 5]
print(a)
try:
a.remove(2)
except ValueError:
print("Что-то пошло не так")
Explanation: list.remove(x) - remove the first element in the list whose value is x. Raises ValueError if no such element exists.
End of explanation
a = [3, 4, 5, 6, 7, 8]
print(a)
b = a.pop()
print(a)
print(b)
b = a.pop(1)
print(a)
print(b)
Explanation: list.pop([i]) - Remove the i-th element and return it. If no index is given, the last element is removed
End of explanation
a = [3, 4, 5, 6, 33, 3, 9]
print(a.index(3))
print(a.index(3, 3))
print(a.index(3, 3, 5))
Explanation: list.index(x, [start [, end]]) - returns the position of the first element whose value is x
(the search is performed from start to end)
Returns
End of explanation
a = [1, 2, 3, 4, 1]
print(a.count(1))
print(a.count(4))
print(a.count(6))
Explanation: list.count(x) - returns the number of elements whose value is x
End of explanation
a = [1, 2, 3, 4]
a.sort()
print(a)
a.sort(reverse=True)
print(a)
a.sort(key = lambda x: (1/2-x%2)* x)
print(a)
sorted(a, reverse=False)
print(a)
Explanation: list.sort([key=function], [reverse=True])
End of explanation
a = [1, 6, 7, 2]
a.reverse()
print(a)
Explanation: list.reverse() - reverse the list in place.
End of explanation
a = [1, 2, 3]
b = a
print(a, b)
a.append(4)
print(a, b)
a = [1, 2, 3]
b = a.copy()
print(a, b)
a.append(4)
print(a, b)
Explanation: list.copy() - a shallow copy of the list
End of explanation
a = [1, 2, 3]
a.clear()
a
Explanation: list.clear() - clear the list
End of explanation
def add(x,y):
return x + y
add(1,2)
add(1,2)
add('a', 'b')
add([1,2,3], [2,3])
Explanation: Functions
A function in Python is an object that takes arguments and returns a value. A function is usually defined with the def statement.
End of explanation
def func(*argc):
return argc
func(1, 2, 3, 4)
def add_new(*argc):
return sum([x for x in argc])
add_new(1,2,3)
add_new('a', 'b', 'c', 'd')
def add_new2(*argc):
if argc is ():
return ()
res = argc[0]
for i in argc[1:]:
res += i
return res
add_new2()
add_new2(1,2,3)
add_new2('a', 'b', 'c')
Explanation: A function can also take a variable number of positional arguments, in which case the parameter name is preceded by *:
End of explanation
func1 = lambda x, y: x + y
func1(1,2)
Explanation: Anonymous functions
End of explanation
def func2(x,y):
return x + y
func2(1,2)
Explanation: The same thing as
End of explanation |
15,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
Notes taken to help with the first project for the Deep Learning Foundations Nanodegree course delivered by Udacity.
My Github repo for this project can be found here
Step1: Change Log
Date Created
Step2: [Top]
Neural network
Output Formula
Synonym
The predicted value
The prediction
\begin{equation}
\hat y_j^\mu = f \left(\Sigma_i w_{ij} x_i^\mu\right)
\end{equation}
Intuition
<img src="../../../../images/simple-nn.png",width=450,height=200>
AND / OR perceptron
<img src="../../../../images/and-or-perceptron.png",width=450,height=200>
NOT perceptron
The NOT operation only cares about one input. The other inputs to the perceptron are ignored.
XOR perceptron
An XOR perceptron is a logic gate that outputs 0 if the inputs are the same and 1 if the inputs are different.
<img src="../../../../images/xor-perceptron.png",width=450,height=200>
Activation functions
AF Summary
Activation functions can be for
* Binary outcomes (2 classes, e.g {True, False})
* Multiclass outcomes
Binary activation functions include
Step3: [Top]
Example
Step4: [Top]
Tanh
Synonyms
Step5: [Top]
Example
Step6: [Top]
Alternative Example
Step7: [Top]
Softmax
Synonyms
Step8: [Top]
Gradient Descent
Learning weights
What if you want to perform an operation, such as predicting college admission, but don't know the correct weights? You'll need to learn the weights from example data, then use those weights to make the predictions.
We need a metric of how wrong the predictions are, the error.
Sum of squared errors (SSE)
\begin{equation}
E =
\frac{1}{2} \Sigma_u \Sigma_j \left [ y_j ^ \mu - \hat y_j^ \mu \right ] ^ 2
\end{equation}
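As a quick numerical illustration (my own toy numbers, not taken from the course material), the SSE can be evaluated directly with numpy:
import numpy as np
y = np.array([1.0, 0.0, 1.0])          # example targets
y_hat = np.array([0.8, 0.2, 0.6])      # example network predictions
sse = 0.5 * np.sum((y - y_hat) ** 2)   # one half of the sum of squared errors
print(sse)                             # 0.12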
where (neural network prediction)
Step9: [Top]
Caveat
Gradient descent is reliant on the beginning weight values. If these are chosen poorly, convergence can occur in a local minimum rather than the global minimum. Random initial weights can be used.
Momentum Term
The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
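A minimal sketch of a momentum update (my own illustration; the 0.9 and 0.01 coefficients are assumed typical values, not course-prescribed ones):
import numpy as np
w = np.array([0.5, -0.3])          # current weights
grad = np.array([0.1, -0.2])       # gradient of the error for this step
v = np.zeros_like(w)               # velocity carried between updates
momentum, learning_rate = 0.9, 0.01
v = momentum * v - learning_rate * grad
w = w + v                          # updates accumulate along persistent gradient directions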
[Top]
Multilayer Perceptrons
Synonyms
MLP (just an acronym)
Numpy column vector
Numpy arays are row vectors by default, and the input_features.T (transpose) transform still leaves it as a row vector. Instead we have to use (use this one, makes more sense) | Python Code:
%run ../../../code/version_check.py
Explanation: Summary
Notes taken to help with the first project for the Deep Learning Foundations Nanodegree course delivered by Udacity.
My Github repo for this project can be found here: adriantorrie/udacity_dlfnd_project_1
Table of Contents
Neural network
Output Formula
Intuition
AND / OR perceptron
NOT perceptron
XOR perceptron
Activation functions
Summary
Deep Learning Book extra notes from Chapter 6: Deep Feedforward Networks
Activation Formula
Sigmoid
Tanh
Tanh Alternative Formula
Softmax
Gradient Descent
Multilayer Perceptrons
Backpropagation
Additional Reading
Additional Videos
Version Control
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
plt.style.use('bmh')
matplotlib.rcParams['figure.figsize'] = (15, 4)
Explanation: Change Log
Date Created: 2017-02-06
Date of Change Change Notes
-------------- ----------------------------------------------------------------
2017-02-06 Initial draft
2017-03-23 Formatting changes for online publishing
Setup
End of explanation
def sigmoid(x):
s = 1 / (1 + np.exp(-x))
return s
inputs = np.array([2.1, 1.5,])
weights = np.array([0.2, 0.5,])
bias = -0.2
output = sigmoid(np.dot(weights, inputs) + bias)
print(output)
Explanation: [Top]
Neural network
Output Formula
Synonym
The predicted value
The prediction
\begin{equation}
\hat y_j^\mu = f \left(\Sigma_i w_{ij} x_i^\mu\right)
\end{equation}
Intuition
<img src="../../../../images/simple-nn.png",width=450,height=200>
AND / OR perceptron
<img src="../../../../images/and-or-perceptron.png",width=450,height=200>
NOT perceptron
The NOT operations only cares about one input. The other inputs to the perceptron are ignored.
XOR perceptron
An XOR perceptron is a logic gate that outputs 0 if the inputs are the same and 1 if the inputs are different.
<img src="../../../../images/xor-perceptron.png",width=450,height=200>
Activation functions
AF Summary
Activation functions can be for
* Binary outcomes (2 classes, e.g {True, False})
* Multiclass outcomes
Binary activation functions include:
* Sigmoid
* Hyperbolic tangent (and the alternative formula provided by LeCun et el, 1998)
* Rectified linear unit
Multi-class activation functions include:
* Softmax
[Top]
Taken from Deep Learning Book - Chapter 6: Deep Feedforward Networks:
6.2.2 Output Units
* Any kind of neural network unit that may be used as an output can also be used as a hidden unit.
6.3 Hidden Units
* Rectified linear units are an excellent default choice of hidden unit. (My note: They are not covered in week one)
6.3.1 Rectified Linear Units and Their Generalizations
* g(z) = max{0, z}
* One drawback to rectified linear units is that they cannot learn via gradient-based methods on examples for which their activation is zero.
* Maxout units generalize rectified linear units further.
* Maxout units can thus be seen as learning the activation function itself rather than just the relationship between units.
* Maxout units typically need more regularization than rectified linear units. They can work well without regularization if the training set is large and the number of pieces per unit is kept low.
* Rectified linear units and all of these generalizations of them are based on the principle that models are easier to optimize if their behavior is closer to linear.
6.3.2 Logistic Sigmoid and Hyperbolic Tangent
* ... use as hidden units in feedforward networks is now discouraged.
* Sigmoidal activation functions are more common in settings other than feed-forward networks. Recurrent networks, many probabilistic models, and some auto-encoders have additional requirements that rule out the use of piecewise linear activation functions and make sigmoidal units more appealing despite the drawbacks of saturation.
Note: Saturation as an issue is also in the extra reading, Yes, you should understand backprop, and also raised in Effecient Backprop, LeCun et el., 1998 and was one of the reasons cited for modifying the tanh function in this notebook.
6.6 Historical notes
* The core ideas behind modern feedforward networks have not changed substantially since the 1980s. The same back-propagation algorithm and the same approaches to gradient descent are still in use
* Most of the improvement in neural network performance from 1986 to 2015 can be attributed to two factors.
* larger datasets have reduced the degree to which statistical generalization is a challenge for neural networks
* neural networks have become much larger, due to more powerful computers, and better software infrastructure
* However, a small number of algorithmic changes have improved the performance of neural networks
* ... replacement of mean squared error with the cross-entropy family of loss functions. Cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss
* ... replacement of sigmoid hidden units with piecewise linear hidden units, such as rectified linear units
[Top]
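Since the notes above keep coming back to rectified linear units, here is a minimal sketch of $g(z) = \max\{0, z\}$ (added for illustration; it is not part of the quoted notes):
```python
import numpy as np

def relu(z):
    # g(z) = max{0, z}, applied element-wise
    return np.maximum(0, z)

z = np.linspace(-3, 3, 7)
print(relu(z))                 # negative inputs are clamped to 0, positive inputs pass through
print((z > 0).astype(float))   # the gradient: 0 where the unit is inactive, 1 where it is active
```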
Activation Formula
\begin{equation}
a = f(h), \qquad f \in \{\text{sigmoid},\ \text{tanh},\ \text{softmax},\ \ldots\}
\end{equation}
where:
* $a$ is the activation function transformation of the output from $h$, e.g. apply the sigmoid function to $h$
and
\begin{equation}
h = \Sigma_i w_i x_i + b
\end{equation}
where:
* $x_i$ are the incoming inputs. A perceptron can have one or more inputs.
* $w_i$ are the weights being assigned to the respective incoming inputs
* $b$ is a bias term
* $h$ is the sum of the weighted input values + a bias figure
<img src="../../../../images/artificial-neural-network.png" width=450 height=200>
[Top]
Sigmoid
Synonyms:
Logistic function
Summary
A sigmoid function is a mathematical function having an "S" shaped curve (sigmoid curve). Often, sigmoid function refers to the special case of the logistic function.
The sigmoid function is bounded between 0 and 1, and as an output can be interpreted as a probability for success.
Formula
\begin{equation}
\text{sigmoid}(x) =
\frac{1} {1 + e^{-x}}
\end{equation}
\begin{equation}
\text{logistic}(x) =
\frac{L} {1 + e^{-k(x - x_0)}}
\end{equation}
where:
* $L$ = the curve's maximum value
* $e$ = the natural logarithm base (also known as Euler's number)
* $x_0$ = the x-value of the sigmoid's midpoint
* $k$ = the steepness of the curve
Network output from activation
\begin{equation}
\text{output} = a = f(h) = \text{sigmoid}(\Sigma_i w_i x_i + b)
\end{equation}
[Top]
Code
End of explanation
x = np.linspace(start=-10, stop=11, num=100)
y = sigmoid(x)
upper_bound = np.repeat([1.0,], len(x))
success_threshold = np.repeat([0.5,], len(x))
lower_bound = np.repeat([0.0,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success')
plt.title('Sigmoid Function Example')
plt.show()
Explanation: [Top]
Example
End of explanation
inputs = np.array([2.1, 1.5,])
weights = np.array([0.2, 0.5,])
bias = -0.2
output = np.tanh(np.dot(weights, inputs) + bias)
print(output)
Explanation: [Top]
Tanh
Synonyms:
Hyperbolic tangent
Summary
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola.
The tanh function is bounded between -1 and 1, and as an output can be interpreted as a probability for success, where the output value:
* 1 = 100%
* 0 = 50%
* -1 = 0%
The tanh function creates stronger gradients around zero, and therefore its derivatives are larger than those of the sigmoid function. Why this is important can be found in Efficient Backprop by LeCun et al. (1998). Also see this answer on Cross-Validated for a representation of the derivative values.
Formula
\begin{equation}
\text{tanh}(x) =
\frac{2} {1 + e^{-2x}}
- 1
\end{equation}
\begin{equation}
\text{tanh}(x) =
\frac{\text{sinh}(x)} {\text{cosh}(x)}
\end{equation}
where:
* $e$ = the natural logarithm base (also known as Euler's number)
* $sinh$ is the hyperbolic sine
* $cosh$ is the hyperbolic cosine
[Top]
Tanh Alternative Formula
\begin{equation}
\text{modified tanh}(x) =
\text{1.7159 tanh } \left(\frac{2}{3}x\right)
\end{equation}
Network output from activation
\begin{equation}
\text{output} = a = f(h) = \text{tanh}(\Sigma_i w_i x_i + b)
\end{equation}
[Top]
Code
End of explanation
x = np.linspace(start=-10, stop=11, num=100)
y = np.tanh(x)
upper_bound = np.repeat([1.0,], len(x))
success_threshold = np.repeat([0.0,], len(x))
lower_bound = np.repeat([-1.0,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success (0.00 = 50%)')
plt.title('Tanh Function Example')
plt.show()
Explanation: [Top]
Example
End of explanation
def modified_tanh(x):
return 1.7159 * np.tanh((2 / 3) * x)
x = np.linspace(start=-10, stop=11, num=100)
y = modified_tanh(x)
upper_bound = np.repeat([1.75,], len(x))
success_threshold = np.repeat([0.0,], len(x))
lower_bound = np.repeat([-1.75,], len(x))
plt.plot(
# upper bound
x, upper_bound, 'w--',
# success threshold
x, success_threshold, 'w--',
# lower bound
x, lower_bound, 'w--',
# sigmoid
x, y
)
plt.grid(False)
plt.xlabel(r'$x$')
plt.ylabel(r'Probability of success (0.00 = 50%)')
plt.title('Alternative Tanh Function Example')
plt.show()
Explanation: [Top]
Alternative Example
End of explanation
def softmax(X):
assert len(X.shape) == 2
s = np.max(X, axis=1)
s = s[:, np.newaxis] # necessary step to do broadcasting
e_x = np.exp(X - s)
div = np.sum(e_x, axis=1)
div = div[:, np.newaxis] # dito
return e_x / div
X = np.array([[1, 2, 3, 6],
[2, 4, 5, 6],
[3, 8, 7, 6]])
y = softmax(X)
y
# compared to tensorflow implementation
batch = np.asarray([[1,2,3,6], [2,4,5,6], [3, 8, 7, 6]])
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.nn.softmax(x)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(y, feed_dict={x: batch})
Explanation: [Top]
Softmax
Synonyms:
Normalized exponential function
Multinomial logistic regression
Summary
Softmax regression is interested in multi-class classification (as opposed to only binary classification when using the sigmoid and tanh functions), and so the label $y$ can take on $K$ different values, rather than only two.
Is often used as the output layer in multilayer perceptrons to allow non-linear relationships to be learnt for multiclass problems.
Formula
From Deep Learning Book - Chapter 4: Numerical Computation
\begin{equation}
\text{softmax}(x)_i =
\frac{\exp(x_i)}{\sum_{j=1}^{n} \exp(x_j)}
\end{equation}
Network output from activation
\begin{equation}
\text{output}_j = a_j = f(h)_j = \text{softmax}(h)_j = \frac{\exp(h_j)}{\sum_k \exp(h_k)}, \qquad h_j = \Sigma_i w_{ij} x_i + b_j
\end{equation}
[Top]
Code
Link for a good discussion on SO regarding the Python implementation of this function, from which the code below was taken.
End of explanation
# Defining the sigmoid function for activations
def sigmoid(x):
return 1 / ( 1 + np.exp(-x))
# Derivative of the sigmoid function
def sigmoid_prime(x):
return sigmoid(x) * (1 - sigmoid(x))
x = np.array([0.1, 0.3])
y = 0.2
weights = np.array([-0.8, 0.5])
# probably use a vector named "w" instead of a name like this
# to make code look more like algebra
# The learning rate, eta in the weight step equation
learnrate = 0.5
# The neural network output
nn_output = sigmoid(x[0] * weights[0] + x[1] * weights[1])
# or nn_output = sigmoid(np.dot(weights, x))
# output error
error = y - nn_output
# error gradient
error_gradient = error * sigmoid_prime(np.dot(x, weights))
# sigmoid_prime(x) is equal to -> nn_output * (1 - nn_output)
# Gradient descent step
del_w = [ learnrate * error_gradient * x[0],
learnrate * error_gradient * x[1]]
# or del_w = learnrate * error_gradient * x
Explanation: [Top]
Gradient Descent
Learning weights
What if you want to perform an operation, such as predicting college admission, but don't know the correct weights? You'll need to learn the weights from example data, then use those weights to make the predictions.
We need a metric of how wrong the predictions are, the error.
Sum of squared errors (SSE)
\begin{equation}
E =
\frac{1}{2} \Sigma_\mu \Sigma_j \left [ y_j ^ \mu - \hat y_j^ \mu \right ] ^ 2
\end{equation}
where (neural network prediction):
\begin{equation}
\hat y_j^\mu =
f \left(\Sigma_i w_{ij} x_i^\mu\right)
\end{equation}
therefore:
\begin{equation}
E =
\frac{1}{2} \Sigma_\mu \Sigma_j \left [ y_j ^ \mu - f \left(\Sigma_i w_{ij} x_i^\mu\right) \right ] ^ 2
\end{equation}
Goal
Find weights $w_{ij}$ that minimize the squared error $E$.
How? Gradient descent.
[Top]
Gradient Descent Formula
\begin{equation}
\Delta w_{ij} = \eta (y_j - \hat y_j) f^\prime (h_j) x_i
\end{equation}
remembering $h_j$ is the input to the output unit $j$:
\begin{equation}
h = \sum_i w_{ij} x_i
\end{equation}
where:
* $(y_j - \hat y_j)$ is the prediction error.
* The larger this error is, the larger the gradient descent step should be.
* $f^\prime (h_j)$ is the gradient
* If the gradient is small, then a change in the unit input $h_j$ will have a small effect on the error.
* This term produces larger gradient descent steps for units that have larger gradients
The errors can be rewritten as:
\begin{equation}
\delta_j = (y_j - \hat y_j) f^\prime (h_j)
\end{equation}
Giving the gradient step as:
\begin{equation}
\Delta w_{ij} = \eta \delta_j x_i
\end{equation}
where:
* $\Delta w_{ij}$ is the (delta) change to the $i$th $j$th weight
* $\eta$ (eta) is the learning rate
* $\delta_j$ (delta j) is the prediction errors
* $x_i$ is the input
[Top]
Algorithm
Set the weight step to zero: $\Delta w_i = 0$
For each record in the training data:
Make a forward pass through the network, calculating the output $\hat y = f(\Sigma_i w_i x_i)$
Calculate the error gradient in the output unit, $\delta = (y − \hat y) f^\prime(\Sigma_i w_i x_i)$
Update the weight step $\Delta w_i= \Delta w_i + \delta x_i$
Update the weights $w_i = w_i + \frac{\eta \Delta w_i} {m}$ where:
η is the learning rate
$m$ is the number of records
Here we're averaging the weight steps to help reduce any large variations in the training data.
Repeat for $e$ epochs.
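A rough sketch of that full loop in code (illustrative only; the function and variable names are assumptions, and the single-record example that accompanies this section only takes one step):
```python
import numpy as np

def train(features, targets, epochs=100, learnrate=0.5):
    # features: (m, n) array of inputs, targets: (m,) array of labels (assumed shapes)
    m, n = features.shape
    weights = np.random.normal(scale=1 / n ** 0.5, size=n)   # small random starting weights
    for e in range(epochs):
        del_w = np.zeros(n)                                   # 1. zero the weight step
        for x, y in zip(features, targets):                   # 2. loop over the records
            output = 1 / (1 + np.exp(-np.dot(weights, x)))    #    forward pass through a sigmoid unit
            error = y - output
            error_term = error * output * (1 - output)        #    delta = error * f'(h)
            del_w += error_term * x                           #    accumulate the weight step
        weights += learnrate * del_w / m                      # 3. average the step and update
    return weights
```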
[Top]
Code
End of explanation
# network size is a 4x3x2 network
n_input = 4
n_hidden = 3
n_output = 2
# make some fake data
np.random.seed(42)
x = np.random.randn(4)
weights_in_hidden = np.random.normal(0, scale=0.1, size=(n_input, n_hidden))
weights_hidden_out = np.random.normal(0, scale=0.1, size=(n_hidden, n_output))
print('x shape\t\t\t= {}'.format(x.shape))
print('weights_in_hidden shape\t= {}'.format(weights_in_hidden.shape))
print('weights_hidden_out\t= {}'.format(weights_hidden_out.shape))
Explanation: [Top]
Caveat
Gradient descent is sensitive to the initial weight values. If they are chosen poorly, the algorithm can converge to a local minimum rather than the global minimum. Small random initial weights can be used to mitigate this.
Momentum Term
The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
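A minimal sketch of the classical momentum update (illustrative; the coefficient values and the toy loss are assumptions):
```python
import numpy as np

def momentum_step(weights, velocity, gradient, learnrate=0.01, momentum=0.9):
    # Accumulate a decaying sum of past gradients and step in that accumulated direction
    velocity = momentum * velocity - learnrate * gradient
    return weights + velocity, velocity

w = np.zeros(3)
v = np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
for _ in range(200):
    grad = 2 * (w - target)          # gradient of the toy loss ||w - target||^2
    w, v = momentum_step(w, v, grad)
print(w)                             # converges towards [1. 2. 3.]
```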
[Top]
Multilayer Perceptrons
Synonyms
MLP (just an acronym)
Numpy column vector
Numpy arrays built from a flat list are 1-D (row-like), and the input_features.T (transpose) transform leaves a 1-D array unchanged. Instead we have to use (use this one, makes more sense):
input_features = input_features[:, None]
Alternatively you can create an array with two dimensions then transpose it:
input_features = np.array(input_features, ndmin=2).T
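A quick sanity check of the resulting shapes (illustrative):
```python
import numpy as np

input_features = np.array([0.1, 0.3])
print(input_features.shape)                        # (2,)   1-D, no row/column orientation
print(input_features[:, None].shape)               # (2, 1) column vector
print(np.array(input_features, ndmin=2).T.shape)   # (2, 1) same result via ndmin + transpose
```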
[Top]
Code example setting up a MLP
End of explanation |
15,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
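As a quick illustration of that encoding (mnist.load_data(one_hot=True) already does this for us, so this is only a sketch):
```python
import numpy as np

label = 4
one_hot = np.eye(10)[label]
print(one_hot)   # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
```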
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Input layer
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer
net = tflearn.fully_connected(net, 800, activation='ReLU')
net = tflearn.fully_connected(net, 200, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
15,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mapping utilities and options
This notebook illustrates how to map SuperDARN radars and FoVs
Step1: Plot all radars in AACGM coordinates
Be patient, this takes a few seconds (so many radars, not to mention the coordinate calculations)
Step2: Plot all radars in geographic coordinates
This is a bit faster (but there are still lots of radars)
Step3: Plot a single radar, highlight beams
Still a bit slow due to aacgm coordinates
Step4: Plot a nice view of the mid-latitude radars
Step5: Plot the RBSP mode
This is sloooooooow... | Python Code:
%pylab inline
from davitpy.pydarn.radar import *
from davitpy.pydarn.plotting import *
from davitpy.utils import *
import datetime as dt
Explanation: Mapping utilities and options
This notebook illustrates how to map SuperDARN radars and FoVs
End of explanation
figure(figsize=(15,10))
# Plot map
subplot(121)
m1 = plotUtils.mapObj(boundinglat=30., gridLabels=True, coords='mag')
overlayRadar(m1, fontSize=8, plot_all=True, markerSize=5)
subplot(122)
m2 = plotUtils.mapObj(boundinglat=-30., gridLabels=True, coords='mag')
overlayRadar(m2, fontSize=8, plot_all=True, markerSize=5)
Explanation: Plot all radars in AACGM coordinates
Be patient, this takes a few seconds (so many radars, not to mention the coordinate calculations)
End of explanation
figure(figsize=(15,10))
# Plot map
subplot(121)
m1 = plotUtils.mapObj(boundinglat=30., gridLabels=False)
overlayRadar(m1, fontSize=8, plot_all=True, markerSize=5)
subplot(122)
m2 = plotUtils.mapObj(boundinglat=-30., gridLabels=False)
overlayRadar(m2, fontSize=8, plot_all=True, markerSize=5)
Explanation: Plot all radars in geographic coordinates
This is a bit faster (but there are still lots of radars)
End of explanation
# Set map
figure(figsize=(10,10))
width = 111e3*40
m = plotUtils.mapObj(width=width, height=width, lat_0=60., lon_0=-30, coords='mag')
code = 'bks'
# Plotting some radars
overlayRadar(m, fontSize=12, codes=code)
# Plot radar fov
overlayFov(m, codes=code, maxGate=75, beams=[0,4,7,8,23])
Explanation: Plot a single radar, highlight beams
Still a bit slow due to aacgm coordinates
End of explanation
# Set map
fig = figure(figsize=(10,10))
m = plotUtils.mapObj(lat_0=70., lon_0=-60, width=111e3*120, height=111e3*55, coords='mag')
codes = ['wal','fhe','fhw','cve','cvw','hok','ade','adw','bks']
# Plotting some radars
overlayRadar(m, fontSize=12, codes=codes)
# Plot radar fov
overlayFov(m, codes=codes[:-1], maxGate=70)#, fovColor=(.8,.9,.9))
overlayFov(m, codes=codes[-1], maxGate=70, fovColor=(.8,.7,.8), fovAlpha=.5)
fig.tight_layout(pad=2)
rcParams.update({'font.size': 12})
Explanation: Plot a nice view of the mid-latitude radars
End of explanation
# Set map
figure(figsize=(8,8))
lon_0 = -70.
m = plotUtils.mapObj(boundinglat=35., lon_0=lon_0)
# Go through each radar
codes = ['gbr','kap','sas','pgr', \
'kod','sto','pyk','han', \
'ksr','cve','cvw','wal', \
'bks','hok','fhw','fhe', \
'inv','rkn']
beams = [[3,4,6],[10,11,13],[2,3,5],[12,13,15], \
[2,3,5],[12,13,15],[0,1,3],[5,6,8], \
[12,13,15],[0,1,3],[19,20,22],[0,1,3], \
[12,13,15],[0,1,3],[18,19,21],[0,1,3],\
[6,7,9],[6,7,9]]
for i,rad in enumerate(codes):
# Plot radar
overlayRadar(m, fontSize=12, codes=rad)
# Plot radar fov
overlayFov(m, codes=rad, maxGate=75, beams=beams[i])
#savefig('rbsp_beams.pdf')
Explanation: Plot the RBSP mode
This is sloooooooow...
End of explanation |
15,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
Step1: Exercise 5.1
In the BRFSS dataset (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be a male between 5’10” and 6’1”. (http
Step2: For example, <tt>scipy.stats.norm</tt> represents a normal distribution.
Step3: A "frozen random variable" can compute its mean and standard deviation.
Step4: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
Step5: How many people are between 5'10" and 6'1"?
Exercise 5.2
To get a feel for the Pareto distribution, let's see how different the world would be if the distribution of human heights were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum of 1 m and a median of 1.5 m.
Plot this distribution. What is the mean human height in Pareto World? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto World, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a Pareto distribution. In Pareto World, the distribution of human heights has parameters $x_m = 1$ m and $α = 1.7$, so the shortest person is 100 cm and the median is 150 cm.
Step6: What is the mean height in Pareto World?
What fraction of people are shorter than the mean?
Out of 7 billion people, how many do we expect to be taller than 1 km? You can use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
How tall do we expect the tallest person to be? Hint | Python Code:
%matplotlib inline
%run chap06soln.py
Explanation: Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
End of explanation
import scipy.stats
Explanation: Exercise 5.1
In the BRFSS dataset (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
In order to join Blue Man Group, you have to be a male between 5’10” and 6’1” (see http://bluemancasting.com). What percentage of the US male population is in this range? Hint: use scipy.stats.norm.cdf.
The <tt>scipy.stats</tt> module contains objects that represent analytic distributions.
End of explanation
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
Explanation: For example, <tt>scipy.stats.norm</tt> represents a normal distribution.
End of explanation
dist.mean(), dist.std()
Explanation: A "frozen random variable" can compute its mean and standard deviation.
End of explanation
dist.cdf(mu-sigma)
Explanation: It can also evaluate its CDF. How many people are more than one standard deviation below the mean? About 16%
End of explanation
alpha = 1.7
xmin = 1
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(xlabel='height (m)', ylabel='CDF')
Explanation: How many people are between 5'10" and 6'1"?
Exercise 5.2
To get a feel for the Pareto distribution, let's see how different the world would be if the distribution of human heights were Pareto. With the parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum of 1 m and a median of 1.5 m.
Plot this distribution. What is the mean human height in Pareto World? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto World, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a Pareto distribution. In Pareto World, the distribution of human heights has parameters $x_m = 1$ m and $α = 1.7$, so the shortest person is 100 cm and the median is 150 cm.
End of explanation
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
thinkplot.Cdf(cdf, complement=True)
thinkplot.Config(yscale='log')
Explanation: What is the mean height in Pareto World?
What fraction of people are shorter than the mean?
Out of 7 billion people, how many do we expect to be taller than 1 km? You can use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
How tall do we expect the tallest person to be? Hint: look up the height that corresponds to a single person out of 7 billion.
Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is
$CDF(x) = 1 − \exp(−(x / λ)^k)$
Can you find a transformation that makes a Weibull distribution look like a straight line? What do the slope and intercept of that line represent?
Use <tt>random.weibullvariate</tt> to generate a sample from a Weibull distribution and use it to test your transformation.
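One way to see the linearizing transformation (added here as a worked hint, not part of the original exercise statement): from the CDF above, $1 - CDF(x) = \exp(−(x/λ)^k)$, so $\log(-\log(1 - CDF(x))) = k \log x − k \log λ$. Plotting $\log(-\log(1 - CDF(x)))$ against $\log x$ should therefore give a straight line whose slope is $k$ and whose intercept is $-k \log λ$.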
Exercise 5.4
For small values of n, we don't expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of the fit is to generate a sample from the analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, we can generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random sample and compare it to the actual distribution. Use random.expovariate to generate the values.
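A minimal sketch of that comparison (illustrative; the 33-minute mean is taken from the text above, and thinkstats2/thinkplot are assumed to be available from %run chap06soln.py as elsewhere in this notebook):
```python
import random

sample = [random.expovariate(1 / 33.0) for _ in range(44)]   # 44 draws with mean ~33 minutes
model_cdf = thinkstats2.Cdf(sample, label='model')
thinkplot.Cdf(model_cdf, complement=True)                    # overlay on the actual CDF to compare
thinkplot.Config(xlabel='minutes', yscale='log')
```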
End of explanation |
15,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CTA 1DC background energy distribution
This is my attempt to understand the background event energy distribution for CTA 1DC simulated data.
See https
Step1: Actual distribution of events
We load some event data for South_z20_50h, select only background events and compute the n_obs distribution of observed events in log(energy) and offset bins.
Step2: Expected distribution from background model
In this section, we load the FITS background model and compute the n_pred distribution of the number of predicted background events in log(energy) and offset bins (same histogram binning as for n_obs above). | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
from astropy.table import Table, vstack
from gammapy.data import DataStore
# Parameters used throughout this notebook
CTADATA = '/Users/deil/work/cta-dc/data/1dc/1dc/'
irf_name = 'South_z20_50h'
n_obs = 10 # Number of observations to use for events
# Histogram binning
energy_bins = 10 ** np.linspace(-2, 2, 100)
offset_bins = np.arange(0, 5, 0.1)
Explanation: CTA 1DC background energy distribution
This is my attempt to understand the background event energy distribution for CTA 1DC simulated data.
See https://forge.in2p3.fr/boards/236/topics/1824?r=2057#message-2057
To understand it it's crucial to also look at event offset in the field of view, so we'll do that.
Also, we'll just look at one IRF: South_z20_50h.
(presumably the effects will be very similar for the others as well, left as an exercise for the reader)
End of explanation
data_store = DataStore.from_dir(CTADATA + 'index/all')
# data_store.info()
mask = data_store.obs_table['IRF'] == irf_name
obs_ids = data_store.obs_table[mask]['OBS_ID'][:n_obs].data
# %%time
# Make one table with all background events (MC_ID == 1)
# only keeping the info we need (DETX, DETY, ENERGY)
tables = []
for obs_id in obs_ids:
t = data_store.obs(obs_id).events.table
t = t[t['MC_ID'] == 1]
t = t[['ENERGY', 'DETX', 'DETY']]
tables.append(t)
table = vstack(tables, metadata_conflicts='silent')
table['OFFSET'] = np.sqrt(table['DETX'] ** 2 + table['DETY'] ** 2)
print(len(table))
# Compute energy-offset histogram n_obs (number of observed events)
n_obs = np.histogram2d(
x=table['ENERGY'], y=table['OFFSET'],
bins=(energy_bins, offset_bins),
)[0]
val = n_obs / n_obs.max()
norm = colors.LogNorm()
plt.pcolormesh(energy_bins, offset_bins, val.T, norm=norm)
plt.semilogx()
plt.colorbar()
plt.xlabel('Energy (TeV)')
plt.ylabel('Offset (deg)')
Explanation: Actual distribution of events
We load some event data for South_z20_50h, select only background events and compute the n_obs distribution of observed events in log(energy) and offset bins.
End of explanation
irf_file = CTADATA + '/caldb/data/cta/1dc/bcf/' + irf_name + '/irf_file.fits'
table = Table.read(irf_file, hdu='BACKGROUND')
table
# Columns:
# BGD float32 (21, 36, 36) 1/s/MeV/sr
# ENERG_LO float32 (21,) TeV
# DETX_LO float32 (36,) deg
# DETY_LO float32 (36,) deg
# dety = table['DETY_LO'].data.squeeze()[18]
# print(dety) # this shows dety == 0.0
# for idx_detx in [18, 21, 22, 23, 24, 25, 26, 27]:
# detx = table['DETX_LO'].data.squeeze()[idx_detx]
# energy = table['ENERG_LO'].data.squeeze()
# bkg = table['BGD'].data.squeeze()[:, idx_detx, 18]
# val = bkg * energy # this is to account for equal-width log-energy bins.
# val /= val.sum()
# txt = f'offset={detx:.1f}'
# plt.plot(energy, val, label=txt)
Explanation: Expected distribution from background model
In this section, we load the FITS background model and compute the n_pred distribution of the number of predicted background events in log(energy) and offset bins (same histogram binning as for n_obs above).
End of explanation |
15,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Doc2Vec Tutorial on the Lee Dataset
Step1: What is it?
Doc2Vec is an NLP tool for representing documents as a vector and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.
Resources
Word2Vec Paper
Doc2Vec Paper
Dr. Michael D. Lee's Website
Lee Corpus
IMDB Doc2Vec Tutorial
Getting Started
To get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140 character tweet, a single paragraph (i.e., journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a <b>corpus</b>.
For this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting
Corporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.
And we'll test our model by eye using the much shorter Lee Corpus which contains 50 documents.
Step2: Define a Function to Read and Preprocess Text
Below, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Note that, for a given file (aka corpus), each continuous line constitutes a single document and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain any tags.
Training the Model
Instantiate a Doc2Vec Object
Now, we'll instantiate a Doc2Vec model with a vector size of 50 and iterate over the training corpus 10 times. We set the minimum word count to 2 in order to discard words that appear only once. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time.
Step6: Build a Vocabulary
Step7: Essentially, the vocabulary is a dictionary (accessible via model.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.vocab['penalty'].count for counts for the word penalty).
Time to Train
If the BLAS library is being used, this should take no more than 2 seconds.
If the BLAS library is not being used, this should take no more than 2 minutes, so use BLAS if you value your time.
Step8: Inferring a Vector
One important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.
Step9: Assessing Model
To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then returning the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and then seeing how they compare with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2) and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second ranks for a comparison of less similar documents.
Step10: Let's count how each document ranks with respect to the training corpus
Step11: Basically, greater than 95% of the inferred documents are found to be most similar to itself and about 5% of the time it is mistakenly most similar to another document. This is great and not entirely surprising. We can take a look at an example
Step12: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second ranked documents should be significantly lower (assuming the documents are in fact different) and the reasoning becomes obvious when we examine the text itself
Step13: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye. | Python Code:
import gensim
import os
import collections
import random
Explanation: Doc2Vec Tutorial on the Lee Dataset
End of explanation
# Set file names for train and test data
test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])
lee_train_file = test_data_dir + os.sep + 'lee_background.cor'
lee_test_file = test_data_dir + os.sep + 'lee.cor'
Explanation: What is it?
Doc2Vec is an NLP tool for representing documents as a vector and is a generalization of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.
Resources
Word2Vec Paper
Doc2Vec Paper
Dr. Michael D. Lee's Website
Lee Corpus
IMDB Doc2Vec Tutorial
Getting Started
To get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140 character tweet, a single paragraph (i.e., journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a <b>corpus</b>.
For this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting
Corporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.
And we'll test our model by eye using the much shorter Lee Corpus which contains 50 documents.
End of explanation
def read_corpus(fname, tokens_only=False):
with open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
if tokens_only:
yield gensim.utils.simple_preprocess(line)
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
Explanation: Define a Function to Read and Preprocess Text
Below, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Note that, for a given file (aka corpus), each continuous line constitutes a single document and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.
End of explanation
train_corpus[:2]
Explanation: Let's take a look at the training corpus
End of explanation
print(test_corpus[:2])
Explanation: And the testing corpus looks like this:
End of explanation
model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=10)
Explanation: Notice that the testing corpus is just a list of lists and does not contain any tags.
Training the Model
Instantiate a Doc2Vec Object
Now, we'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 10 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. Model accuracy can be improved by increasing the number of iterations but this generally increases the training time.
End of explanation
model.build_vocab(train_corpus)
Explanation: Build a Vocabulary
End of explanation
%time model.train(train_corpus)
Explanation: Essentially, the vocabulary is a dictionary (accessible via model.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.vocab['penalty'].count for counts for the word penalty).
Time to Train
If the BLAS library is being used, this should take no more than 2 seconds.
If the BLAS library is not being used, this should take no more than 2 minutes, so use BLAS if you value your time.
End of explanation
model.infer_vector(['only', 'you', 'can', 'prevent', 'forrest', 'fires'])
Explanation: Inferring a Vector
One important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.
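A rough sketch of that cosine comparison using plain numpy (illustrative; numpy is an extra import here, and this is essentially what most_similar computes for us in the assessment step):
```python
import numpy as np

v1 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forrest', 'fires'])
v2 = model.infer_vector(test_corpus[0])
cosine_similarity = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(cosine_similarity)
```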
End of explanation
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
Explanation: Assessing Model
To assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then returning the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and then seeing how they compare with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2) and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second ranks for a comparison of less similar documents.
End of explanation
collections.Counter(ranks) #96% accuracy
Explanation: Let's count how each document ranks with respect to the training corpus
End of explanation
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Basically, greater than 95% of the inferred documents are found to be most similar to itself and about 5% of the time it is mistakenly most similar to another document. This is great and not entirely surprising. We can take a look at an example:
End of explanation
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(train_corpus))
# Compare and print the most/median/least similar documents from the train corpus
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
Explanation: Notice above that the most similar document has a similarity score of ~80% (or higher). However, the similarity score for the second ranked documents should be significantly lower (assuming the documents are in fact different) and the reasoning becomes obvious when we examine the text itself
End of explanation
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus))
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye.
End of explanation |
15,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the Mandelbrot Set
First, a boring example
A sequence of numbers can be defined by a map. For example, consider the simple map
$$f
Step1: Interesting! To explore this more, lets make a function that generates the sequence itself.
We'll supply two numbers | Python Code:
def f(x, c):
return x**2 + c
x = 0.0
for i in range(10):
print(i, x)
x = f(x, 1)
x = 0.0
for i in range(10):
print(i, x)
x = f(x, -1)
x = 0.0
for i in range(10):
print(i, x)
x = f(x, 0.1)
Explanation: Introduction to the Mandelbrot Set
First, a boring example
A sequence of numbers can be defined by a map. For example, consider the simple map
$$f: x \rightarrow x + 1.$$
If we start from $x_0=0$, this generates the sequence
$$0, 1, 2, 3, 4, 5, 6,\ldots.$$
Note that the sequence is unbounded; it diverges to positive infinity.
Taking different values of the addend $+1$ for this example is not much more interesting.
For example, the map
$$f: x \rightarrow x - 1.$$
generates the unbounded sequence
$$0, -1, -2, -3, -4, -5, -6,\ldots,$$
which diverges to negative infinity.
It is pretty easy to see that any function $f(x) = x + c$ will not be any more interesting.
All sequences will diverge to either positive or negative infinity (depending on the sign of $c$),
while the case $c=0$ is the identity map, and generates the (extremely!) boring sequence
$$0, 0, 0, 0, 0, 0, 0,\ldots.$$
A much more interesting example
Simply squaring the argument in the function makes it non-linear,
$$f: x \rightarrow x^2 + c,$$
and things get much more interesting.
For example, with $c=1$, we have the map
$$f: x \rightarrow x^2 + 1,$$
which generates the sequence
$$0, 1, 2, 5, 26, \ldots,$$
which diverges.
Taking $c=-1$, the map is $f(x)=x^2-1$, which generates the sequence
$$0, -1, 0, -1, 0, -1, \ldots,$$
which stays bounded.
Now it is helpful to use python to evaluate these sequences.
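As a small illustration of that bounded-versus-divergent question, here is a helper (added as a sketch; the escape threshold of 2 and the iteration count are assumptions for this check):
```python
def stays_bounded(c, n=50, bound=2.0):
    # Iterate x -> x**2 + c from x0 = 0 and stop as soon as |x| exceeds the bound
    x = 0.0
    for _ in range(n):
        x = x * x + c
        if abs(x) > bound:
            return False
    return True

for c in (1, -1, -2, -2.01, 0.25):
    print(c, stays_bounded(c))
```
For the same values of c tried in this notebook, this flags c = 1 and c = -2.01 as escaping, while c = -1, -2 and 0.25 stay bounded.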
End of explanation
def make_sequence(c, n):
x = 0.0
sequence = [x]
for i in range(n):
x = f(x, c)
sequence.append(x)
return sequence
make_sequence(1, 10)
make_sequence(-1, 10)
make_sequence(-2, 10)
make_sequence(-2.01, 10)
make_sequence(0.25, 10)
Explanation: Interesting! To explore this more, lets make a function that generates the sequence itself.
We'll supply two numbers: the value of the constant $c$ and the number of terms in the sequence.
End of explanation |
15,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D Plots, in Python
(Easy version, without the "np" and "plt" namespaces.)
We first load in the toolboxes for numerical python and plotting.
Note the "import * " command will bring in the functions without any namespace labels.
Step1: Plotting a simple surface.
Let's plot the hyperbola $z = x^2 - y^2$.
The 3D plotting commands expect arrays as entries, so we create a mesh grid from linear variables $x$ and $y$, resulting in arrays $X$ and $Y$. We then compute $Z$ as a grid (array).
Step2: I don't know how to simply plot in matplotlib.
Instead, we have three steps
- create a figure
- indicate that the figure will be in 3D
- then send the plot_surface command to the figure axis.
Step3: Wireframe plots
Use the wireframe command. Note we can adjust the separation between lines, using the stride parameters.
Step4: Subplots
To make two plots, side-by-side, you make one figure and add two subplots. (I'm reusing the object label "ax" in the code here.)
Step5: Parameterized surfaces
A parameterized surfaces expresses the spatial variable $x,y,z$ as a function of two independent parameters, say $u$ and $v$.
Here we plot a sphere. Use the usual spherical coordinates.
$$x = \cos(u)\sin(v) $$
$$y = \sin(u)\sin(v) $$
$$z = \cos(v) $$
with appropriate ranges for $u$ and $v$. We set up the array variables as follows
Step6: Outer product for speed
Python provides an outer product, which makes it easy to multiply the $u$ vector by the $v$ vectors, to create the 2D array of grid values. This is sometime useful for speed, so you may see it in other's people's code when they really need the speed. Here is an example.
Step7: A donut
Let's plot a torus. The idea is to start with a circle
$$ x_0 = \cos(u) $$
$$ y_0 = \sin(u) $$
$$ z_0 = 0$$
then add a little circle perpendicular to it
$$ (x_0,y_0,0)\cos(v) + (0,0,1)\sin(v) = (\cos(u)\cos(v), \sin(u)\cos(v), \sin(v)).$$
Add them, with a scaling. | Python Code:
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
from mpl_toolkits.mplot3d import Axes3D
Explanation: 3D Plots, in Python
(Easy version, without the "np" and "plt" namespaces.)
We first load in the toolboxes for numerical python and plotting.
Note the "import * " command will bring in the functions without any namespace labels.
End of explanation
# Make data
x = linspace(-2, 2, 100)
y = linspace(-2, 2, 100)
X, Y = meshgrid(x, y)
Z = X**2 - Y**2
Explanation: Plotting a simple surface.
Let's plot the hyperbola $z = x^2 - y^2$.
The 3D plotting commands expect arrays as entries, so we create a mesh grid from linear variables $x$ and $y$, resulting in arrays $X$ and $Y$. We then compute $Z$ as a grid (array).
End of explanation
fig = figure()
ax = axes(projection='3d')
ax.plot_surface(X, Y, Z, color='b')
Explanation: I don't know how to simply plot in matplotlib.
Instead, we have three steps
- create a figure
- indicate that the figure will be in 3D
- then send the plot_surface command to the figure axis.
End of explanation
fig = figure()
ax = axes(projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
Explanation: Wireframe plots
Use the wireframe command. Note we can adjust the separation between lines, using the stride parameters.
End of explanation
fig = figure(figsize=figaspect(0.3))
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(X, Y, Z, color='b')
ax = fig.add_subplot(122, projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
Explanation: Subplots
To make two plots, side-by-side, you make one figure and add two subplots. (I'm reusing the object label "ax" in the code here.)
End of explanation
u = linspace(0, 2*pi, 100)
v = linspace(0, pi, 100)
u,v = meshgrid(u,v)
x = cos(u) * sin(v)
y = sin(u) * sin(v)
z = cos(v)
fig = figure()
ax = axes(projection='3d')
# Plot the surface
ax.plot_surface(x, y, z, color='b')
Explanation: Parameterized surfaces
A parameterized surfaces expresses the spatial variable $x,y,z$ as a function of two independent parameters, say $u$ and $v$.
Here we plot a sphere. Use the usual spherical coordinates.
$$x = \cos(u)\sin(v) $$
$$y = \sin(u)\sin(v) $$
$$z = \cos(v) $$
with appropriate ranges for $u$ and $v$. We set up the array variables as follows:
End of explanation
u = linspace(0, 2*pi, 100)
v = linspace(0, pi, 100)
x = outer(cos(u), sin(v))
y = outer(sin(u), sin(v))
z = outer(ones(size(u)), cos(v))
fig = figure()
ax = axes(projection='3d')
# Plot the surface
ax.plot_surface(x, y, z, color='b')
Explanation: Outer product for speed
Python provides an outer product, which makes it easy to multiply the $u$ vector by the $v$ vectors, to create the 2D array of grid values. This is sometime useful for speed, so you may see it in other's people's code when they really need the speed. Here is an example.
End of explanation
# Make data
u = linspace(0, 2*pi, 100)
v = linspace(0, 2*pi, 100)
u,v = meshgrid(u,v)
R = 10
r = 4
x = R * cos(u) + r*cos(u)*cos(v)
y = R * sin(u) + r*sin(u)*cos(v)
z = r * sin(v)
fig = figure()
ax = axes(projection='3d')
ax.set_xlim([-(R+r), (R+r)])
ax.set_ylim([-(R+r), (R+r)])
ax.set_zlim([-(R+r), (R+r)])
ax.plot_surface(x, y, z, color='c')
Explanation: A donut
Let's plot a torus. The idea is to start with a circle
$$ x_0 = \cos(u) $$
$$ y_0 = \sin(u) $$
$$ z_0 = 0$$
then add a little circle perpendicular to it
$$ (x_0,y_0,0)\cos(v) + (0,0,1)\sin(v) = (\cos(u)\cos(v), \sin(u)\cos(v), \sin(v)).$$
Add them, with a scaling.
End of explanation |
15,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Dream
Deep Dream, or Inceptionism, was introduced by Google in this blogpost. Deep Dream is an algorithm that optimizes an input image so that it maximizes its activations in certain layer(s) of a pretrained network. By this optimization process, different patterns, objects or shapes appear in the image based on what the neurons of the network have previously learned. Here is an example
Step1: We will use the same image for the example.
Step2: Here are the settings we will use, including the layers of the network we want to "dream" and the weights for each loss term.
Step3: We load the pretrained network
Step4: Deep Dream is a gradient ascent process that tries to maximize the L2 norm of activations of certain layer(s) of the network. Let's define the loss
Step5: Some additional loss terms are added to make the image look nicer
Step6: We define the function that will compute the gradients grads of the image in dream_in based on the loss we just defined. This function is the one that will be used iteratively to update the image based on the gradients.
Step7: Let's run it. We will run 5 iterations, in which we will forward the image, compute the gradients based on the loss and apply the gradients to the image.
Step8: We can display the image for the last 5 iterations
Step9: And let's display the final image with higher resolution. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from keras.applications import vgg16
from keras.layers import Input
from dream import *
Explanation: Deep Dream
Deep Dream, or Inceptionism, was introduced by Google in this blogpost. Deep Dream is an algorithm that optimizes an input image so that it maximizes its activations in certain layer(s) of a pretrained network. By this optimization process, different patterns, objects or shapes appear in the image based on what the neurons of the network have previously learned. Here is an example:
<img src="figs/skyarrow.png">
In this exercise we will implement the algorithm in Keras and test it on some example images to see its effect.
End of explanation
from scipy.misc import imread
img_dir = '../images/dream/sky1024px.jpg'
I = imread(img_dir)
plt.imshow(I)
plt.axis('off')
plt.show()
Explanation: We will use the same image for the example.
End of explanation
settings = {'features': {'block5_conv1': 0.05,
'block5_conv2': 0.1},
'continuity': 0.1,
'dream_l2': 0.02}
Explanation: Here are the settings we will use, including the layers of the network we want to "dream" and the weights for each loss term.
End of explanation
from keras.preprocessing.image import load_img
width, height = load_img(img_dir).size
img_height = 224
img_width = int(width * img_height / height)
img_size = (img_height, img_width, 3)
dream_in = Input(batch_shape=(1,) + img_size)
model = vgg16.VGG16(input_tensor=dream_in,weights='imagenet', include_top=False)
Explanation: We load the pretrained network:
End of explanation
# dictionary with all layers
layer_dict = dict([(layer.name, layer) for layer in model.layers])
# define the loss
loss = K.variable(0.)
for layer_name in settings['features']:
assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
coeff = settings['features'][layer_name]
x = layer_dict[layer_name].output
shape = layer_dict[layer_name].output_shape
# Maximize L2 norm of activations: loss is -activations
# we avoid border artifacts by only involving non-border pixels in the loss
loss -= coeff * K.sum(K.square(x[:, 2: shape[1] - 2, 2: shape[2] - 2, :])) / np.prod(shape[1:])
Explanation: Deep Dream is a gradient ascent process that tries to maximize the L2 norm of activations of certain layer(s) of the network. Let's define the loss:
End of explanation
# add continuity loss (gives image local coherence, can result in an artful blur)
loss += settings['continuity'] * continuity_loss(dream_in,img_height, img_width) / np.prod(img_size)
# add image L2 norm to loss (prevents pixels from taking very high values, makes image darker)
loss += settings['dream_l2'] * K.sum(K.square(dream_in)) / np.prod(img_size)
Explanation: Some additional loss terms are added to make the image look nicer:
End of explanation
# compute the gradients of the dream wrt the loss
grads = K.gradients(loss, dream_in)
outputs = [loss]
if isinstance(grads, (list, tuple)):
outputs += grads
else:
outputs.append(grads)
f_outputs = K.function([dream_in], outputs)
Explanation: We define the function that will compute the gradients grads of the image in dream_in based on the loss we just defined. This function is the one that will be used iteratively to update the image based on the gradients.
End of explanation
import time
evaluator = Evaluator(img_size,f_outputs)
# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the loss
ims = []
iterations = 5
x = preprocess_image(img_dir,img_height, img_width)
for i in range(iterations):
t = time.time()
# run L-BFGS
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
fprime=evaluator.grads, maxfun=7)
print(i,'Current loss value:', min_val,time.time()-t,'seconds.')
# decode the dream and save it
x = x.reshape(img_size)
img = deprocess_image(np.copy(x),img_height, img_width)
ims.append(img)
Explanation: Let's run it. We will run 5 iterations, in which we will forward the image, compute the gradients based on the loss and apply the gradients to the image.
End of explanation
f, axarr = plt.subplots(1, len(ims[:5]),figsize=(20,20))
for i,im in enumerate(ims[:5]):
axarr[i].imshow(im)
axarr[i].axis('off')
plt.show()
Explanation: We can display the image for the last 5 iterations:
End of explanation
plt.figure(figsize=(20,20))
plt.imshow(ims[-1])
plt.axis('off')
plt.show()
Explanation: And let's display the final image with higher resolution.
End of explanation |
15,266 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to use a logical index to slice a torch tensor. Which means, I want to select the columns that get a '1' in the logical index. | Problem:
import numpy as np
import pandas as pd
import torch
A_logical, B = load_data()
C = B[:, A_logical.bool()] |
15,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman) which is a supplemenatry material to the book Computational Seismology
Step1: Solving the 1D acoustic wave equation by finite-differences
In the previous lectures we derived acoustic and elastic approximations to the partial differential equations of motion, describing seismic wave propagation in 3D isotropic elastic media. For homogeneous acoustic media we derived analytical solutions in terms of Green's functions. Finally, we dealt with the discretization of material parameters and wavefields in continous media and the finite-difference approximation of partial derivatives.
In this lesson we wrap up the results of all previous lectures by solving the 1D acoustic wave equation for a homogeneous medium using the finite-difference method.
Finite difference solution
As derived in this and this lecture, the acoustic wave equation in 1D with constant density is
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial t^2} \ = \ vp(x)^2 \frac{\partial^2 p(x,t)}{\partial x^2} + f(x,t) \nonumber
\end{equation}
with pressure $p$, acoustic velocity $vp$ and source term $f$. We can split the source term into a spatial and temporal part. Spatially, we assume that the source is localized at one point $x_s$. Therefore, the spatial source contribution is a Dirac $\delta$-function $\delta(x-x_s)$. The temporal source part is an arbitrary source wavelet $s(t)$
Step2: Source time function
To excitate wave propagation in our 1D homogenous model, we use the first derivative of the Gaussian
Step3: Analytical Solution
In this lecture we calculated the Green's functions for the homogenous acoustic wave equation
\begin{equation}
\frac{\partial^2}{\partial t^2} G(x,t;x_s, t_s) \ - \ vp_0^2 \Delta G(x,t;x_s, t_s) \ = \delta (x-x_s) \delta (t-t_s) \nonumber
\end{equation}
where $\Delta$ denotes the Laplace operator and the $\delta-$function is defined as
\begin{equation}
\delta(x) = \left{
\begin{array}{ll}
\infty &x=0 \
0 &x\neq 0
\end{array}
\right.\notag
\end{equation}
with the constraint
Step4: Comparison of numerical with analytical solution
In the code below we solve the homogeneous acoustic wave equation by the 3-point difference operator and compare the numerical results with the analytical solution. | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman) which is a supplemenatry material to the book Computational Seismology: A Practical Introduction, additional modifications by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries (PLEASE RUN THIS CODE FIRST!)
# ----------------------------------------------
import numpy as np
import matplotlib
# Show Plot in The Notebook
matplotlib.use("nbagg")
import matplotlib.pyplot as plt
# Sub-plot Configuration
# ----------------------
from matplotlib import gridspec
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
nx = 1000 # number of grid points in x-direction
dx = 0.5 # grid point distance in x-direction
vp0 = 333. # wave speed in medium (m/s)
isrc = 499 # source location in grid in x-direction
ir = 730 # receiver location in grid in x-direction
nt = 1001 # maximum number of time steps
dt = 0.0010 # time step
Explanation: Solving the 1D acoustic wave equation by finite-differences
In the previous lectures we derived acoustic and elastic approximations to the partial differential equations of motion, describing seismic wave propagation in 3D isotropic elastic media. For homogeneous acoustic media we derived analytical solutions in terms of Green's functions. Finally, we dealt with the discretization of material parameters and wavefields in continous media and the finite-difference approximation of partial derivatives.
In this lesson we wrap up the results of all previous lectures by solving the 1D acoustic wave equation for a homogeneous medium using the finite-difference method.
Finite difference solution
As derived in this and this lecture, the acoustic wave equation in 1D with constant density is
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial t^2} \ = \ vp(x)^2 \frac{\partial^2 p(x,t)}{\partial x^2} + f(x,t) \nonumber
\end{equation}
with pressure $p$, acoustic velocity $vp$ and source term $f$. We can split the source term into a spatial and temporal part. Spatially, we assume that the source is localized at one point $x_s$. Therefore, the spatial source contribution is a Dirac $\delta$-function $\delta(x-x_s)$. The temporal source part is an arbitrary source wavelet $s(t)$:
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial t^2} \ = \ vp(x)^2 \frac{\partial^2 p(x,t)}{\partial x^2} + \delta(x-x_s)s(t) \nonumber
\end{equation}
Both second derivatives can be approximated by a 3-point difference formula. For example for the time derivative, we get:
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial t^2} \ \approx \ \frac{p(x,t+dt) - 2 p(x,t) + p(x,t-dt)}{dt^2}, \nonumber
\end{equation}
and equivalently for the spatial derivative:
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial x^2} \ \approx \ \frac{p(x+dx,t) - 2 p(x,t) + p(x-dx,t)}{dx^2}, \nonumber
\end{equation}
Injecting these approximations into the wave equation allows us to formulate the pressure p(x) for the time step $t+dt$ (the future) as a function of the pressure at time $t$ (now) and $t-dt$ (the past). This is called an explicit time integration scheme allowing the $extrapolation$ of the space-dependent field into the future only looking at the nearest neighbourhood.
After discretization of the P-wave velocity and pressure wavefield at the discrete spatial grid points $i = 0, 1, 2, ..., nx$ and time steps $n = 0, 1, 2, ..., nt$, we can replace the time-dependent part (upper index time, lower index space) by
\begin{equation}
\frac{p_{i}^{n+1} - 2 p_{i}^n + p_{i}^{n-1}}{\mathrm{d}t^2} \ = \ vp_{i}^2 \biggl( \frac{\partial^2 p}{\partial x^2}\biggr) \ + \frac{s_{i}^n}{dx} \nonumber
\end{equation}
The $\delta$-function $\delta(x-x_s)$ in the source term is approximated by the boxcar function:
$$
\delta_{bc}(x) = \left\{
\begin{array}{ll}
1/dx &|x|\leq dx/2 \\
0 &\text{elsewhere}
\end{array}
\right.
$$
Solving for $p_{i}^{n+1}$ leads to the extrapolation scheme:
\begin{equation}
p_{i}^{n+1} \ = \ vp_i^2 \mathrm{d}t^2 \left( \frac{\partial^2 p}{\partial x^2} \right) + 2p_{i}^n - p_{i}^{n-1} + \frac{\mathrm{d}t^2}{dx} s_{i}^n
\end{equation}
The spatial derivatives are determined by
\begin{equation}
\frac{\partial^2 p(x,t)}{\partial x^2} \ \approx \ \frac{p_{i+1}^{n} - 2 p_{i}^n + p_{i-1}^{n}}{\mathrm{d}x^2} \nonumber
\end{equation}
Eq. (1) is the essential core of the FD modelling code. Because we derived analytical solutions for wave propagation in a homogeneous medium, we should test our first code implementation for a similar medium, by setting:
\begin{equation}
vp_i = vp0\notag
\end{equation}
at each spatial grid point $i = 0, 1, 2, ..., nx$, in order to compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
p_{i}^0 = 0, \nonumber
\end{equation}
so the modelling starts with zero pressure amplitude at each spatial grid point $i = 0, 1, 2, ..., nx$. As boundary conditions, we assume
\begin{align}
p_{0}^n = 0, \nonumber\\
p_{nx}^n = 0, \nonumber\\
\end{align}
for all time steps n. This Dirichlet boundary condition, leads to artifical boundary reflections which would obviously not describe a homogeneous medium. For now, we simply extend the model, so that boundary reflections are not recorded at the receiver positions.
Let's implement it ...
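Before the full implementation below, here is a minimal sketch (not part of the original notebook) of a single extrapolation step of eq. (1), assuming `p` and `pold` are NumPy arrays of length `nx` and a homogeneous model `vp0`:
```python
def extrapolate_once(p, pold, vp0, dt, dx, src_amp, isrc):
    d2px = np.zeros(len(p))
    d2px[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx ** 2   # 3-point spatial operator
    pnew = 2 * p - pold + vp0 ** 2 * dt ** 2 * d2px          # explicit time extrapolation
    pnew[isrc] += dt ** 2 * src_amp / dx                      # boxcar-scaled source injection
    return pnew
```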
End of explanation
# Plot Source Time Function
# -------------------------
f0 = 25. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift
print('Source frequency =', f0, 'Hz')
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Plot position configuration
# ---------------------------
plt.ion()
fig1 = plt.figure(figsize=(6, 5))
gs1 = gridspec.GridSpec(1, 2, width_ratios=[1, 1], hspace=0.3, wspace=0.3)
# Plot source time function
# -------------------------
ax1 = plt.subplot(gs1[0])
ax1.plot(time, src) # plot source time function
ax1.set_title('Source Time Function')
ax1.set_xlim(time[0], time[-1])
ax1.set_xlabel('Time (s)')
ax1.set_ylabel('Amplitude')
# Plot source spectrum
# --------------------
ax2 = plt.subplot(gs1[1])
spec = np.fft.fft(src) # source time function in frequency domain
freq = np.fft.fftfreq(spec.size, d = dt / 4.) # time domain to frequency domain
ax2.plot(np.abs(freq), np.abs(spec)) # plot frequency and amplitude
ax2.set_xlim(0, 250) # only display frequency from 0 to 250 Hz
ax2.set_title('Source Spectrum')
ax2.set_xlabel('Frequency (Hz)')
ax2.set_ylabel('Amplitude')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position("right")
plt.show()
Explanation: Source time function
To excitate wave propagation in our 1D homogenous model, we use the first derivative of the Gaussian:
\begin{equation}
s(t) = -2 (t-t_0) f_0^2 exp(-f_0^2 (t-t_0)^2) \nonumber
\end{equation}
as source time function at the discrete source position $isrc$. Where $t_0$ denotes a time shift and $f_0$ the dominant frequency of the source.
End of explanation
# Analytical solution
# -------------------
G = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinate in x-direction
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - np.abs(x[ir] - x[isrc]) / vp0) >= 0:
G[it] = 1. / (2 * vp0)
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
lim = Gc.max() # get limit value from the maximum amplitude
# Plotting convolution of Green's function with source wavelet
plt.plot(time, Gc)
plt.title("Analytical solution" )
plt.xlabel("Time, s")
plt.ylabel("Amplitude")
plt.grid()
plt.show()
Explanation: Analytical Solution
In this lecture we calculated the Green's functions for the homogenous acoustic wave equation
\begin{equation}
\frac{\partial^2}{\partial t^2} G(x,t;x_s, t_s) \ - \ vp_0^2 \Delta G(x,t;x_s, t_s) \ = \delta (x-x_s) \delta (t-t_s) \nonumber
\end{equation}
where $\Delta$ denotes the Laplace operator and the $\delta-$function is defined as
\begin{equation}
\delta(x) = \left\{
\begin{array}{ll}
\infty &x=0 \\
0 &x\neq 0
\end{array}
\right.\notag
\end{equation}
with the constraint:
\begin{equation}
\int_{-\infty}^{\infty} \delta(x)\; dx = 1.\notag
\end{equation}
When comparing numerical with analytical solutions the functions that - in the limit - lead to the $\delta-$function will become very important. An example is the boxcar function
$$
\delta_{bc}(x) = \left\{
\begin{array}{ll}
1/dx &|x|\leq dx/2 \\
0 &\text{elsewhere}
\end{array}
\right.
$$
fulfilling these properties as $dx\rightarrow0$. These functions are used to properly scale the source terms to obtain correct absolute amplitudes.
To describe analytical solutions for the acoustic wave equation we also make use of the unit step function, also known as the Heaviside function, defined as
$$
H(x) = \left\{
\begin{array}{ll}
0 &x<0 \\
1 &x \geq 0
\end{array}
\right.
$$
The Heaviside function is the integral of the $\delta-$function (and vice-versa the $\delta$-function is defined as the derivative of the Heaviside function). In 1D case, the Greens function is proportional to a Heaviside function.
$$
G=\frac{1}{2vp_0}H\left(t-\frac{|x|}{vp_0}\right)
$$
As the response to an arbitrary source time function can be obtained by convolution
$$
G_{seis} = G(x,t;x_s,t_s) * s(t),
$$
this implies that the propagating waveform is the integral of the source time function. The response is shown for a source time function with a 1st derivative of a Gaussian.
End of explanation
# Plot Snapshot & Seismogram (PLEASE RERUN THIS CODE AGAIN AFTER SIMULATION!)
# ---------------------------------------------------------------------------
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros(nx) # p at time n (now)
pold = np.zeros(nx) # p at time n-1 (past)
pnew = np.zeros(nx) # p at time n+1 (present)
d2px = np.zeros(nx) # 2nd space derivative of p
# Initialize model (assume homogeneous model)
# -------------------------------------------
vp = np.zeros(nx)
vp = vp + vp0 # initialize wave velocity in model
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Plot position configuration
# ---------------------------
plt.ion()
fig2 = plt.figure(figsize=(6, 4))
gs2 = gridspec.GridSpec(1, 2, width_ratios=[1, 1], hspace=0.3, wspace=0.3)
# Plot 1D wave propagation
# ------------------------
# Note: comma is needed to update the variable
ax3 = plt.subplot(gs2[0])
leg1,= ax3.plot(isrc, 0, 'r*', markersize=11) # plot position of the source in snapshot
leg2,= ax3.plot(ir, 0, 'k^', markersize=8) # plot position of the receiver in snapshot
up31,= ax3.plot(p) # plot pressure update each time step
ax3.set_xlim(0, nx)
ax3.set_ylim(-lim, lim)
ax3.set_title('Time Step (nt) = 0')
ax3.set_xlabel('nx')
ax3.set_ylabel('Amplitude')
ax3.legend((leg1, leg2), ('Source', 'Receiver'), loc='upper right', fontsize=10, numpoints=1)
# Plot seismogram
# ---------------
# Note: comma is needed to update the variable
ax4 = plt.subplot(gs2[1])
leg3,= ax4.plot(0,0,'r--',markersize=1) # plot analytical solution marker
leg4,= ax4.plot(0,0,'b-',markersize=1) # plot numerical solution marker
up41,= ax4.plot(time, seis) # update recorded seismogram each time step
up42,= ax4.plot([0], [0], 'r|', markersize=15) # update time step position
ax4.yaxis.tick_right()
ax4.yaxis.set_label_position("right")
ax4.set_xlim(time[0], time[-1])
ax4.set_title('Seismogram')
ax4.set_xlabel('Time (s)')
ax4.set_ylabel('Amplitude')
ax4.legend((leg3, leg4), ('Analytical', 'FD'), loc='upper right', fontsize=10, numpoints=1)
plt.plot(time,Gc,'r--') # plot analytical solution
plt.show()
# 1D Wave Propagation (Finite Difference Solution)
# ------------------------------------------------
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
for i in range(1, nx - 1):
d2px[i] = (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp ** 2 * dt ** 2 * d2px
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc] = pnew[isrc] + src[it] / dx * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Output Seismogram
# -----------------
seis[it] = p[ir]
# Update Data for Wave Propagation Plot
# -------------------------------------
idisp = 2 # display frequency
if (it % idisp) == 0:
ax3.set_title('Time Step (nt) = %d' % it)
up31.set_ydata(p)
up41.set_ydata(seis)
up42.set_data(time[it], seis[it])
plt.gcf().canvas.draw()
Explanation: Comparison of numerical with analytical solution
In the code below we solve the homogeneous acoustic wave equation by the 3-point difference operator and compare the numerical results with the analytical solution.
End of explanation |
15,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Keras Layer
Idea
Step1: AntiRectifier Layer
Step2: Parametrs and Settings
Step3: Data Preparation
Step4: Model with Custom Layer
Step5: Excercise
Compare with an equivalent network that is 2x bigger (in terms of Dense layers) + ReLU) | Python Code:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Layer, Activation
from keras.datasets import mnist
from keras import backend as K
from keras.utils import np_utils
Explanation: Custom Keras Layer
Idea:
We build a custom activation layer called Antirectifier,
which modifies the shape of the tensor that passes through it.
We need to specify two methods: get_output_shape_for and call.
Note that the same result can also be achieved via a Lambda layer (keras.layer.core.Lambda).
```python
keras.layers.core.Lambda(function, output_shape=None, arguments=None)
```
Because our custom layer is written with primitives from the Keras backend (K), our code can run both on TensorFlow and Theano.
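As a hedged illustration (not part of the original notebook), roughly the same behaviour could be expressed with a Lambda layer using the backend primitives:
```python
def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    return K.concatenate([K.relu(x), K.relu(-x)], axis=1)

def antirectifier_output_shape(input_shape):
    shape = list(input_shape)
    shape[-1] *= 2
    return tuple(shape)

# e.g. model.add(Lambda(antirectifier, output_shape=antirectifier_output_shape))
```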
End of explanation
class Antirectifier(Layer):
'''This is the combination of a sample-wise
L2 normalization with the concatenation of the
positive part of the input with the negative part
of the input. The result is a tensor of samples that are
twice as large as the input samples.
It can be used in place of a ReLU.
# Input shape
2D tensor of shape (samples, n)
# Output shape
2D tensor of shape (samples, 2*n)
# Theoretical justification
When applying ReLU, assuming that the distribution
of the previous output is approximately centered around 0.,
you are discarding half of your input. This is inefficient.
Antirectifier allows to return all-positive outputs like ReLU,
without discarding any data.
Tests on MNIST show that Antirectifier allows to train networks
with twice less parameters yet with comparable
classification accuracy as an equivalent ReLU-based network.
'''
def compute_output_shape(self, input_shape):
shape = list(input_shape)
assert len(shape) == 2 # only valid for 2D tensors
shape[-1] *= 2
return tuple(shape)
def call(self, inputs):
inputs -= K.mean(inputs, axis=1, keepdims=True)
inputs = K.l2_normalize(inputs, axis=1)
pos = K.relu(inputs)
neg = K.relu(-inputs)
return K.concatenate([pos, neg], axis=1)
Explanation: AntiRectifier Layer
End of explanation
# global parameters
batch_size = 128
nb_classes = 10
nb_epoch = 10
Explanation: Parameters and Settings
End of explanation
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Data Preparation
End of explanation
# build the model
model = Sequential()
model.add(Dense(256, input_shape=(784,)))
model.add(Antirectifier())
model.add(Dropout(0.1))
model.add(Dense(256))
model.add(Antirectifier())
model.add(Dropout(0.1))
model.add(Dense(10))
model.add(Activation('softmax'))
# compile the model
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# train the model
model.fit(X_train, Y_train,
batch_size=batch_size, epochs=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
Explanation: Model with Custom Layer
End of explanation
## your code here
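# One possible sketch (hedged, not a reference solution): the same topology with Dense
# layers twice as wide and plain ReLU activations, to compare against the Antirectifier model.
model_relu = Sequential()
model_relu.add(Dense(512, input_shape=(784,)))
model_relu.add(Activation('relu'))
model_relu.add(Dropout(0.1))
model_relu.add(Dense(512))
model_relu.add(Activation('relu'))
model_relu.add(Dropout(0.1))
model_relu.add(Dense(10))
model_relu.add(Activation('softmax'))
model_relu.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model_relu.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch,
               verbose=1, validation_data=(X_test, Y_test))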
Explanation: Exercise
Compare with an equivalent network that is 2x bigger (in terms of Dense layer width) and uses ReLU activations.
End of explanation |
15,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minist 예제
Minist 예제를 살펴봅시다. 사실 minist 예제는 3장 다룬 기초적인 Neural Networw와 거의 동일 합니다.
단지, 입력 DataLoader를 사용하여 Minist dataset를 이용하는 부분만 차이가 나고, 데이터량이 많아서 시간이 좀 많이 걸리는 부분입니다.
입력 DataLoader를 이용하는 것은 4장에서 잠시 다루었기 때문에, 시간을 줄이기 위해서 cuda gpu를 사용하는 부분을 추가했습니다.
입력변수와 network상의 변수의 torch.Tensor를 cuda() 함수를 통해서 선언하면 됩니다.
python
is_cuda = torch.cuda.is_available()
if is_cuda
Step1: 1. Set up the input DataLoader
Define a loader for the training data (dataset: MNIST, batch size 50, with shuffling)
Define a loader for the test data (dataset: MNIST, batch size 1000)
Step2: 2. Preliminary setup
* model
* loss
* optimizer
Step3: 3. Training loop
* (generate the input batch)
* run the model (forward pass)
* compute the loss
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
Adding and plotting a summary
We simply append the loss and accuracy to lists and plot them later with matplotlib.
All that is needed is to append the loss and accuracy to lists named train_loss and train_accu.
A dedicated tool such as TensorBoard would of course be more convenient, but for a quick check this is perfectly adequate.
First declare the empty lists,
python
train_loss = []
train_accu = []
then append the loss and accuracy during the training or test loop:
```python
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
```
Step4: 4. Predict & Evaluate
Step5: 5. save model parameter
훈련시킨 model의 parameter를 파일에 저장한다. 다음장에서 저장한 parameter를 restore할 것입니다.
Step6: 6. plot images which failed to predict
여러장을 표시할 경우 별로 함수를 이용하여도 되나, pytorch에서 제공하는 함수를 이용하여 출력하여 보았습니다.
python
torchvision.utils.make_grid(tensor, nrow=8, padding=2)
Some care is also needed with plt.imshow: the array is laid out as colordepth x Height x Width, but it has to be rearranged to Height x Width x colordepth, so transpose(1, 2, 0) is applied. | Python Code:
%matplotlib inline
Explanation: MNIST example
Let's walk through the MNIST example. It is in fact almost identical to the basic neural network covered in Chapter 3.
The only differences are that an input DataLoader is used to load the MNIST dataset, and that training takes noticeably longer because there is much more data.
Since the input DataLoader was already covered briefly in Chapter 4, we additionally use a CUDA GPU to cut down the training time.
All that is needed is to place the torch.Tensor of the input variables and of the network parameters on the GPU via the cuda() function.
python
is_cuda = torch.cuda.is_available()
if is_cuda : model.cuda()
if is_cuda : data, target = data.cuda(), target.cuda()
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
batch_size = 50
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),  # use the held-out test split
batch_size=1000)
Explanation: 1. Set up the input DataLoader
Define a loader for the training data (dataset: MNIST, batch size 50, with shuffling)
Define a loader for the test data (dataset: MNIST, batch size 1000)
End of explanation
class MnistModel(nn.Module):
def __init__(self):
super(MnistModel, self).__init__()
# input is 28x28
# padding=2 for same padding
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
# feature map size is 14*14 by pooling
# padding=2 for same padding
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
# feature map size is 7*7 by pooling
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64*7*7) # reshape Variable
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = MnistModel()
if is_cuda : model.cuda()
loss_fn = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
Explanation: 2. Preliminary setup
* model
* loss
* optimizer
End of explanation
# trainning
model.train()
train_loss = []
train_accu = []
for epoch in range(3):
for i, (image, target) in enumerate(train_loader):
if is_cuda : image, target = image.cuda(), target.cuda()
        image, target = Variable(image), Variable(target) # wrap the input image and target
        output = model(image) # forward pass through the model
        loss = loss_fn(output, target) # compute the loss
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
plt.plot(train_accu)
plt.plot(train_loss)
Explanation: 3. Training loop
* (generate the input batch)
* run the model (forward pass)
* compute the loss
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
Adding and plotting a summary
We simply append the loss and accuracy to lists and plot them later with matplotlib.
All that is needed is to append the loss and accuracy to lists named train_loss and train_accu.
A dedicated tool such as TensorBoard would of course be more convenient, but for a quick check this is perfectly adequate.
First declare the empty lists,
python
train_loss = []
train_accu = []
then append the loss and accuracy during the training or test loop:
```python
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
```
End of explanation
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
Explanation: 4. Predict & Evaluate
End of explanation
checkpoint_filename = 'minist.ckpt'
torch.save(model.state_dict(), checkpoint_filename)
Explanation: 5. save model parameter
Save the parameters of the trained model to a file. In the next chapter we will restore the saved parameters.
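A hedged sketch of restoring them later (assuming the same MnistModel class is defined there):
```python
model = MnistModel()
model.load_state_dict(torch.load(checkpoint_filename))
```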
End of explanation
model.eval()
image, target = iter(test_loader).next() # fetch a single batch from the test_loader
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
## convert the images, ground-truth labels and predictions to numpy arrays
images = image.data.cpu().numpy()
cls_true = target.data.cpu().numpy().squeeze()
prediction = output.data.max(1)[1].cpu().numpy().squeeze()
# find where the prediction differs from the ground truth
incorrect = (prediction != cls_true)
# keep only the misclassified examples
images = images[incorrect]
cls_true = cls_true[incorrect]
prediction = prediction[incorrect]
# print the error rate
print('error : {:.1%}, number ={:}'.format(incorrect.sum()/len(incorrect), incorrect.sum()))
# display the misclassified images
tensorImg = torch.Tensor(images)
plt.imshow(torchvision.utils.make_grid(tensorImg).numpy().transpose((1,2,0)))
plt.show()
# display the predicted labels of the misclassified images
print('prediction :')
pred_resized = np.pad(prediction, (0, 8 - len(prediction)%8), 'constant', constant_values=(0, 0))
print(pred_resized.reshape(-1,8))
print('\n')
# display the true labels of the misclassified images
print('True :')
true_resized = np.pad(cls_true, (0, 8 - len(cls_true)%8), 'constant', constant_values=(0, 0))
print(true_resized.reshape(-1,8))
Explanation: 6. plot images which failed to predict
여러장을 표시할 경우 별로 함수를 이용하여도 되나, pytorch에서 제공하는 함수를 이용하여 출력하여 보았습니다.
python
torchvision.utils.make_grid(tensor, nrow=8, padding=2)
plt.imshow할때도 약간 주의해야 하는데, array는 colordepth x Height x Width 되어 있지만, Height x Width x colordepth 형태로 바꾸어야 해서 transpose(1, 2, 0)를 수행하였습니다.
End of explanation |
15,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Entrada de Dados
Step2: Executar comandos Linux
Step3: Instalar biblioteca
Step4: Saída de dados ricas
Step5: Utilizando a ajuda integrada
Use a tecla tab para executar o Intelisense
No final de um comando/função ou método adicione uma ?. E execute a célula SHIFT + ENTER.
Step6: Parar execução de um código mal comportado
CTRL+M I
Step7: Integração Drive e GitHub
Permite abrir e salvar arquivos nessas aplicações
Trabalhando de forma colaborativa
Compartilhando seu código
Adicionando e resolvendo comentários | Python Code:
print('Olá seja bem vindo!!')
Explanation: <a href="https://colab.research.google.com/github/cavalcantetreinamentos/curso_python/blob/master/Primeiros_passos_Google_Colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Aprendendo Google Colab - Cavalcante Treinamentos
Executar código é muito simples
Use CTRL + ENTER - Executa o código
ou SHIFT + ENTER - Executa o código e pula para a próxima célula
ou ALT + ENTER - Executa o código e cria uma nova célula
End of explanation
nome = input('Qual é seu nome: ')
print(nome + ' seja bem vindo ao curso de Python')
Explanation: Entrada de Dados
End of explanation
!cat /proc/cpuinfo
!cat /proc/meminfo
Explanation: Executar comandos Linux
End of explanation
!pip install requests
!pip install tensorflow==1.2
Explanation: Instalar biblioteca
End of explanation
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Fills and Alpha Example")
plt.show()
Explanation: Saída de dados ricas
End of explanation
import numpy as np
np.random?
Explanation: Utilizando a ajuda integrada
Use a tecla tab para executar o Intelisense
No final de um comando/função ou método adicione uma ?. E execute a célula SHIFT + ENTER.
End of explanation
while(True):
pass
Explanation: Parar execução de um código mal comportado
CTRL+M I
End of explanation
texto = 'Claudio'
Explanation: Integração Drive e GitHub
Permite abrir e salvar arquivos nessas aplicações
Trabalhando de forma colaborativa
Compartilhando seu código
Adicionando e resolvendo comentários
End of explanation |
15,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GRADEV
Step2: Gap robust allan deviation comparison
Compute the GRADEV of a white phase noise. Compares two different
scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.
Step4: White phase noise
Compute the GRADEV of a nonstationary white phase noise. | Python Code:
%matplotlib inline
import pylab as plt
import numpy as np
import allantools
Explanation: GRADEV: gap robust allan deviation
Notebook setup & package imports
End of explanation
def example1():
    '''Compute the GRADEV of a white phase noise. Compares two different
    scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.'''
N = 1000
f = 1
y = np.random.randn(1,N)[0,:]
x = np.linspace(1,len(y),len(y))
(x_ax, y_ax, [err_l, err_h], ns) = allantools.gradev(y,rate=f,taus=x)
plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h],label='GRADEV, no gaps')
    y[int(0.4*N):int(0.6*N)] = np.NaN # Simulate missing data (slice indices must be integers)
(x_ax, y_ax, [err_l, err_h], ns) = allantools.gradev(y,rate=f,taus=x)
plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h], label='GRADEV, with gaps')
plt.xscale('log')
plt.yscale('log')
plt.grid()
plt.legend()
plt.xlabel('Tau / s')
plt.ylabel('Overlapping Allan deviation')
plt.show()
example1()
Explanation: Gap robust allan deviation comparison
Compute the GRADEV of a white phase noise. Compares two different
scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.
End of explanation
def example2():
    '''Compute the GRADEV of a nonstationary white phase noise.'''
N=1000 # number of samples
f = 1 # data samples per second
s=1+5/N*np.arange(0,N)
y=s*np.random.randn(1,N)[0,:]
x = np.linspace(1,len(y),len(y))
x_ax, y_ax, [err_l, err_h], ns = allantools.gradev(y, rate=f, taus=x)
plt.loglog(x_ax, y_ax,'b.',label="No gaps")
y[int(0.4*N):int(0.6*N,)] = np.NaN # Simulate missing data
x_ax, y_ax, [err_l, err], ns = allantools.gradev(y, rate=f, taus=x)
plt.loglog(x_ax, y_ax,'g.',label="With gaps")
plt.grid()
plt.legend()
plt.xlabel('Tau / s')
plt.ylabel('Overlapping Allan deviation')
plt.show()
example2()
Explanation: White phase noise
Compute the GRADEV of a nonstationary white phase noise.
End of explanation |
15,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step2: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook
Step5: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step6: Use interactive to build a user interface for exploring the draw_circle function
Step7: Use the display function to show the widgets created by interactive | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display, SVG
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
s = '''
<svg width="100" height="100">
  <circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
'''
SVG(s)
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    '''Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
        The fill color of the circle.
    '''
# YOUR CODE HERE
    p = '''
    <svg width="%d" height="%d">
      <circle cx="%d" cy="%d" r="%d" fill="%s" />
    </svg>
    '''
svg = p % (width, height, cx, cy, r, fill)
display(SVG(svg))
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
# YOUR CODE HERE
w = interactive(draw_circle, width=fixed(300), height=fixed(300), cx = (0,300), cy=(0,300), r = (0,50), fill= 'red');
w.children[0].min
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
Explanation: Use interactive to build a user interface for exploing the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
# YOUR CODE HERE
w
assert True # leave this to grade the display of the widget
Explanation: Use the display function to show the widgets created by interactive:
End of explanation |
15,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process (GP) smoothing
This example deals with the case when we want to smooth the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis.
It is important to note that we are not dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.
If we assume the functional dependency between $x$ and $y$ is linear then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.
However, the functional form of $y=f(x)$ is not always known in advance, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it
Step1: Let's try a linear regression first
As humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory
Step2: Linear regression model recap
The linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have
Step3: Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
Step4: Let's also make a helper function for inferring the most likely values of $z$
Step5: Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice.
Exploring different levels of smoothing
Let's try to allocate 50% variance to the noise, and see if the result matches our expectations.
Step6: It appears that the variance is split evenly between the noise and the hidden process, as expected.
Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data
Step7: Smoothing "to the limits"
By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches the process which almost doesn't change over the domain of $x$
Step8: Interactive smoothing
Below you can interactively test different levels of smoothing. Notice, because we use a shared Theano variable to specify the smoothing above, the model doesn't need to be recompiled every time you move the slider, and so the inference is fast! | Python Code:
%pylab inline
figsize(12, 6);
import numpy as np
import scipy.stats as stats
x = np.linspace(0, 50, 100)
y = (np.exp(1.0 + np.power(x, 0.5) - np.exp(x/15.0)) +
np.random.normal(scale=1.0, size=x.shape))
plot(x, y);
xlabel("x");
ylabel("y");
title("Observed Data");
Explanation: Gaussian Process (GP) smoothing
This example deals with the case when we want to smooth the observed data points $(x_i, y_i)$ of some 1-dimensional function $y=f(x)$, by finding the new values $(x_i, y'_i)$ such that the new data is more "smooth" (see more on the definition of smoothness through allocation of variance in the model description below) when moving along the $x$ axis.
It is important to note that we are not dealing with the problem of interpolating the function $y=f(x)$ at the unknown values of $x$. Such problem would be called "regression" not "smoothing", and will be considered in other examples.
If we assume the functional dependency between $x$ and $y$ is linear then, by making the independence and normality assumptions about the noise, we can infer a straight line that approximates the dependency between the variables, i.e. perform a linear regression. We can also fit more complex functional dependencies (like quadratic, cubic, etc), if we know the functional form of the dependency in advance.
However, the functional form of $y=f(x)$ is not always known in advance, and it might be hard to choose which one to fit, given the data. For example, you wouldn't necessarily know which function to use, given the following observed data. Assume you haven't seen the formula that generated it:
End of explanation
plot(x, y);
xlabel("x");
ylabel("y");
lin = stats.linregress(x, y)
plot(x, lin.intercept + lin.slope * x);
title("Linear Smoothing");
Explanation: Let's try a linear regression first
As humans, we see that there is a non-linear dependency with some noise, and we would like to capture that dependency. If we perform a linear regression, we see that the "smoothed" data is less than satisfactory:
End of explanation
import pymc3 as pm
from theano import shared
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy import optimize
Explanation: Linear regression model recap
The linear regression assumes there is a linear dependency between the input $x$ and output $y$, sprinkled with some noise around it so that for each observed data point we have:
$$ y_i = a + b\, x_i + \epsilon_i $$
where the observation errors at each data point satisfy:
$$ \epsilon_i \sim N(0, \sigma^2) $$
with the same $\sigma$, and the errors are independent:
$$ cov(\epsilon_i, \epsilon_j) = 0 \: \text{ for } i \neq j $$
The parameters of this model are $a$, $b$, and $\sigma$. It turns out that, under these assumptions, the maximum likelihood estimates of $a$ and $b$ don't depend on $\sigma$. Then $\sigma$ can be estimated separately, after finding the most likely values for $a$ and $b$.
Gaussian Process smoothing model
This model allows departure from the linear dependency by assuming that the dependency between $x$ and $y$ is a Brownian motion over the domain of $x$. This doesn't go as far as assuming a particular functional dependency between the variables. Instead, by controlling the standard deviation of the unobserved Brownian motion we can achieve different levels of smoothness of the recovered functional dependency at the original data points.
The particular model we are going to discuss assumes that the observed data points are evenly spaced across the domain of $x$, and therefore can be indexed by $i=1,\dots,N$ without the loss of generality. The model is described as follows:
\begin{equation}
\begin{aligned}
z_i & \sim \mathcal{N}(z_{i-1} + \mu, (1 - \alpha)\cdot\sigma^2) \: \text{ for } i=2,\dots,N \\
z_1 & \sim ImproperFlat(-\infty,\infty) \\
y_i & \sim \mathcal{N}(z_i, \alpha\cdot\sigma^2)
\end{aligned}
\end{equation}
where $z$ is the hidden Brownian motion, $y$ is the observed data, and the total variance $\sigma^2$ of each observation is split between the hidden Brownian motion and the noise in proportions of $1 - \alpha$ and $\alpha$ respectively, with parameter $0 < \alpha < 1$ specifying the degree of smoothing.
When we estimate the maximum likelihood values of the hidden process $z_i$ at each of the data points, $i=1,\dots,N$, these values provide an approximation of the functional dependency $y=f(x)$ as $\mathrm{E}\,[f(x_i)] = z_i$ at the original data points $x_i$ only. Therefore, again, the method is called smoothing and not regression.
Let's describe the above GP-smoothing model in PyMC3
End of explanation
LARGE_NUMBER = 1e5
model = pm.Model()
with model:
smoothing_param = shared(0.9)
mu = pm.Normal("mu", sd=LARGE_NUMBER)
tau = pm.Exponential("tau", 1.0/LARGE_NUMBER)
z = GaussianRandomWalk("z",
mu=mu,
tau=tau / (1.0 - smoothing_param),
shape=y.shape)
obs = pm.Normal("obs",
mu=z,
tau=tau / smoothing_param,
observed=y)
Explanation: Let's create a model with a shared parameter for specifying different levels of smoothing. We use very wide priors for the "mu" and "tau" parameters of the hidden Brownian motion, which you can adjust according to your application.
End of explanation
def infer_z(smoothing):
with model:
smoothing_param.set_value(smoothing)
res = pm.find_MAP(vars=[z], fmin=optimize.fmin_l_bfgs_b)
return res['z']
Explanation: Let's also make a helper function for inferring the most likely values of $z$:
End of explanation
smoothing = 0.5
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
Explanation: Please note that in this example, we are only looking at the MAP estimate of the unobserved variables. We are not really interested in inferring the posterior distributions. Instead, we have a control parameter $\alpha$ which lets us allocate the variance between the hidden Brownian motion and the noise. Other goals and/or different models may require sampling to obtain the posterior distributions, but for our goal a MAP estimate will suffice.
Exploring different levels of smoothing
Let's try to allocate 50% variance to the noise, and see if the result matches our expectations.
End of explanation
smoothing = 0.9
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
Explanation: It appears that the variance is split evenly between the noise and the hidden process, as expected.
Let's try gradually increasing the smoothness parameter to see if we can obtain smoother data:
End of explanation
fig, axes = subplots(2, 2)
for ax, smoothing in zip(axes.ravel(), [0.95, 0.99, 0.999, 0.9999]):
z_val = infer_z(smoothing)
ax.plot(x, y)
ax.plot(x, z_val)
ax.set_title('Smoothing={:05.4f}'.format(smoothing))
Explanation: Smoothing "to the limits"
By increasing the smoothing parameter, we can gradually make the inferred values of the hidden Brownian motion approach the average value of the data. This is because as we increase the smoothing parameter, we allow less and less of the variance to be allocated to the Brownian motion, so eventually it approaches the process which almost doesn't change over the domain of $x$:
End of explanation
from IPython.html.widgets import interact
@interact(smoothing=[0.01,0.99])
def plot_smoothed(smoothing=0.9):
z_val = infer_z(smoothing)
plot(x, y);
plot(x, z_val);
title("Smoothing={}".format(smoothing));
Explanation: Interactive smoothing
Below you can interactively test different levels of smoothing. Notice, because we use a shared Theano variable to specify the smoothing above, the model doesn't need to be recompiled every time you move the slider, and so the inference is fast!
End of explanation |
15,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
    '''Find the indices of the local maxima in a sequence.'''
#empty list and make the parameter into an array
empty = []
f = np.array(a)
#Loop through the parameter and tell if it is a max
for i in range(len(f)):
if i == 0 and f[i] > f[i+1]:
empty.append(i)
if i == len(f)-1 and f[i]> f[i-1]:
empty.append(i)
if i > 0 and i < len(f)-1:
if f[i]>f[i-1] and f[i] > f[i+1]:
empty.append(i)
    return np.array(empty)  # return a Numpy array of integer indices, as required
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
#iterate through pi_digits_str
f = [c for c in pi_digits_str]
#find peaks in f
x = find_peaks(f)
#graph
plt.hist(np.diff(x),10, align = 'left')
plt.xticks(range(0,11))
plt.title("Differences of Local Maxima for pi");
plt.xlabel("Difference");
plt.ylabel("Frequency");
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation |
15,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Tree Classification
Create entry points to spark
Step1: Decision tree classification with pyspark
Import data
Step2: Process categorical columns
The following code does three things with pipeline
Step3: Build StringIndexer stages
Step4: Build OneHotEncoder stages
Step5: Build VectorAssembler stage
Step6: Build pipeline model
Step7: Fit pipeline model
Step8: Transform data
Step9: Split data into training and test datasets
Step10: Build cross-validation model
Estimator
Step11: Parameter grid
Step12: Evaluator
Step13: Build cross-validation model
Step14: Fit cross-validation mode
Step15: Prediction
Step16: Prediction on training data
Step17: Prediction on test data
Step18: Confusion matrix
Pyspark doesn’t have a function to calculate the confusion matrix automatically, but we can still easily get a confusion matrix with a combination of several methods from the RDD class.
Step19: Parameters from the best model | Python Code:
from pyspark import SparkContext
sc = SparkContext(master = 'local')
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
Explanation: Decision Tree Classification
Create entry points to spark
End of explanation
cuse = spark.read.csv('data/cuse_binary.csv', header=True, inferSchema=True)
cuse.show(5)
Explanation: Decision tree classification with pyspark
Import data
End of explanation
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml import Pipeline
# categorical columns
categorical_columns = cuse.columns[0:3]
Explanation: Process categorical columns
The following code does three things with pipeline:
StringIndexer all categorical columns
OneHotEncoder all categorical index columns
VectorAssembler all feature columns into one vector column
Categorical columns
End of explanation
stringindexer_stages = [StringIndexer(inputCol=c, outputCol='strindexed_' + c) for c in categorical_columns]
# encode label column and add it to stringindexer_stages
stringindexer_stages += [StringIndexer(inputCol='y', outputCol='label')]
Explanation: Build StringIndexer stages
End of explanation
onehotencoder_stages = [OneHotEncoder(inputCol='strindexed_' + c, outputCol='onehot_' + c) for c in categorical_columns]
Explanation: Build OneHotEncoder stages
End of explanation
feature_columns = ['onehot_' + c for c in categorical_columns]
vectorassembler_stage = VectorAssembler(inputCols=feature_columns, outputCol='features')
Explanation: Build VectorAssembler stage
End of explanation
# all stages
all_stages = stringindexer_stages + onehotencoder_stages + [vectorassembler_stage]
pipeline = Pipeline(stages=all_stages)
Explanation: Build pipeline model
End of explanation
pipeline_model = pipeline.fit(cuse)
Explanation: Fit pipeline model
End of explanation
final_columns = feature_columns + ['features', 'label']
cuse_df = pipeline_model.transform(cuse).\
select(final_columns)
cuse_df.show(5)
Explanation: Transform data
End of explanation
training, test = cuse_df.randomSplit([0.8, 0.2], seed=1234)
Explanation: Split data into training and test datasets
End of explanation
from pyspark.ml.regression import GeneralizedLinearRegression
from pyspark.ml.classification import LogisticRegression, DecisionTreeClassifier
dt = DecisionTreeClassifier(featuresCol='features', labelCol='label')
Explanation: Build cross-validation model
Estimator
End of explanation
from pyspark.ml.tuning import ParamGridBuilder
param_grid = ParamGridBuilder().\
addGrid(dt.maxDepth, [2,3,4,5]).\
build()
Explanation: Parameter grid
End of explanation
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", metricName="areaUnderROC")
Explanation: Evaluator
End of explanation
from pyspark.ml.tuning import CrossValidator
cv = CrossValidator(estimator=dt, estimatorParamMaps=param_grid, evaluator=evaluator, numFolds=4)
Explanation: Build cross-validation model
End of explanation
cv_model = cv.fit(cuse_df)
Explanation: Fit the cross-validation model
End of explanation
show_columns = ['features', 'label', 'prediction', 'rawPrediction', 'probability']
Explanation: Prediction
End of explanation
pred_training_cv = cv_model.transform(training)
pred_training_cv.select(show_columns).show(5, truncate=False)
Explanation: Prediction on training data
End of explanation
pred_test_cv = cv_model.transform(test)
pred_test_cv.select(show_columns).show(5, truncate=False)
Explanation: Prediction on test data
End of explanation
label_and_pred = cv_model.transform(cuse_df).select('label', 'prediction')
label_and_pred.rdd.zipWithIndex().countByKey()
Explanation: Confusion matrix
Pyspark doesn’t have a function to calculate the confusion matrix automatically, but we can still easily get a confusion matrix with a combination of several methods from the RDD class.
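As an alternative sketch (assuming pandas is available on the driver), the counts can be pivoted into a matrix:
```python
import pandas as pd
pdf = label_and_pred.toPandas()
print(pd.crosstab(pdf['label'], pdf['prediction']))
```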
End of explanation
print('The best MaxDepth is:', cv_model.bestModel._java_obj.getMaxDepth())
Explanation: Parameters from the best model
End of explanation |
15,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Step1: Visualizing objective functions by interpolating in randomly drawn directions
Motivation
Useful visualizations of high dimensional objective functions are challenging and often require projecting to a low dimension.
Contribution
We introduce a new technique for visualizing the local objective function that provides information about the gradient, curvature, and flatness by estimating the objective function at pertubations around a selected point.
Step3: Background
We are interested in the problem of searching for a set of parameters $x^\in\mathbb{R}^n$ that minimize the loss function $L(x)\in\mathbb{R}$
Step7: This function has a saddle point at $(0,0)$ and a range of local optima along $x_1 = \frac{1}{x_2}$ as shown in the following figure.
Step8: A first step
Step9: A Linear interpolation would be correct in pointing us to the fact that there is a local optima, but would mislead us into thinking that there was no path from the left optima to the right optima. In fact, if we plot the loss function, we see that this slice actually goes through a region of high loss before making it into another region with a local optima.
Step13: Proposed Approach
Step18: We now show the scatter plots from our proposed technique.
Step19: We now visualize the point given from the first section.
Step20: Our technique correctly recovers that this point is a flat local optimum.
Step24: These help us distinguish between optima which have flat regions around them as well as saddle points. Note that the flatness is defined because of the points around 0. We contrast this with a quadratic function with one local minima has no points around zero
Step25: Visualizing Objective Functions for Fashion MNIST
We demonstrate the utility of our technique for visualizing the loss function for neural networks. We first visualize the loss function for a single layer neural network, better known as logistic regression. Since there are no non-linearitites in this scenario, the categorical cross entropy loss is convex. We sanity check our technique by visualizing the loss around the initializer and a minimizer found by optimizing using large batch stochastic gradient descent. We evaluate the loss using the whole dataset.
Negative Log Likelihood and Logistic Regression
Step27: Around the initializer, the loss function is linear.
Step28: footnote
Step31: Tracking negative curvature | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
End of explanation
#@title Import Statements
%pylab inline
import time
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import tensorflow as tf
import seaborn as sns
import scipy
tf.enable_eager_execution(config=tf.ConfigProto(log_device_placement=True))
tf.logging.set_verbosity(tf.logging.DEBUG)
tfe = tf.contrib.eager
sns.set_context('notebook', font_scale=1.25)
sns.set_style('whitegrid')
Explanation: Visualizing objective functions by interpolating in randomly drawn directions
Motivation
Useful visualizations of high dimensional objective functions are challenging and often require projecting to a low dimension.
Contribution
We introduce a new technique for visualizing the local objective function that provides information about the gradient, curvature, and flatness by estimating the objective function at perturbations around a selected point.
End of explanation
def L_goodfellow_saddle(x):
    """Function given in Goodfellow et al. Accepts a np.array of dim 2."""
return (1-x[0]*x[1]) ** 2
Explanation: Background
We are interested in the problem of searching for a set of parameters $x^*\in\mathbb{R}^n$ that minimize the loss function $L(x)\in\mathbb{R}$: $x^* = \arg\min_x L(x)$. In the case where $L(x)$ is convex, a well tuned algorithm like gradient descent will converge to a global optimum. In the case where $L(x)$ is non-convex there might be many critical points: local optima, plateaus, saddle points. We can use the Hessian, $H$, to distinguish between these critical points. However, if the loss function is high dimensional, computing the Hessian is not computationally efficient.
Existing techniques (Goodfellow et al 2015) visualize one or two dimensional slices of the objective by interpolating between selected points. However, these techniques are limited to a few dimensions, and conclusions drawn from them can be misleading (Draxler et al 2018). Here we describe a new analysis technique for visualizing the landscape.
Method
In this section, we briefly describe the proposed technique. To understand how $L(x)$ changes around $x_0$, our method proceeds by repeatedly drawing vectors $d_i=\frac{d_i'}{||d_i'||}\in\mathbb{R}^n$ where $d_i'\sim N(0, I)$. We then evaluate $L_{+\alpha}=L(x_0 + \alpha d_i)$ to understand how the loss function would change if we made a step in that direction: if we were at a local minimum, all directions $d_i$ would result in the loss increasing. If most directions resulted in a positive change and some in a negative one, we might be close to a local optimum or at a saddle point. To disambiguate these two situations, we also evaluate $L_{-\alpha}=L(x_0 - \alpha d_i)$. By evaluating the centered pair of data points $(L(x_0) - L_{+\alpha}, L(x_0) - L_{-\alpha})$ we can distinguish the following cases:
All pairs have both members negative implies $x_0$ is a local optimum
All pairs have one positive and one negative member implies $x_0$ might be in a linear region.
Some pairs have either both members positive or both members negative implies $x_0$ is a saddle point.
In addition, if the changes were close to zero then we could be in a flat region.
Visualizing toy loss functions
We claim that our method provides insightful visualizations of functions with complicated geometry, for example local minima, saddle points and plateaus. To demonstrate the utility of the method in distinguishing the situations listed in Method (a minimal classification sketch based on those rules appears right after this explanation), we use the function $L(x)=(1-x_1 x_2)^2$ where $x=(x_1, x_2)$, as in Goodfellow et al. 2015.
End of explanation
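A minimal sketch (an addition, not the paper's code) of the decision rules above; `f` and `b` hold the centered changes $L(x_0) - L(x_0+\alpha d_i)$ and $L(x_0) - L(x_0-\alpha d_i)$ over many sampled directions, and `tol` is an assumed flatness tolerance:
import numpy as np

def classify_point(f, b, tol=1e-8):
    # f, b: arrays of centered forward/backward changes, one entry per direction
    f, b = np.asarray(f), np.asarray(b)
    if np.all(np.abs(f) < tol) and np.all(np.abs(b) < tol):
        return 'flat region'
    if np.all(f < 0) and np.all(b < 0):
        return 'local optimum'
    if np.all(f * b < 0):
        return 'locally linear region'
    return 'saddle point or mixed-curvature region'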
#@title Run this cell to initialize the library of tools being used in this notebook.
def plot_contours(
L_fn,
ylims=(-2., 2.),
xlims=(-2., 2.),
nx=10,
ny=10,
ax=None,
show_contour_lines=False,
**plot_kwargs):
    """Plots the contours of the function in 2D space.
Args:
L_fn: The loss function that accepts a np.ndarray of dim 2.
ylims: A tuple of floats containing the limits on the y-axis.
xlims: A tuple of floats containing the limits on the x-axis.
nx: The integer number of points from the x-domain.
ny: The integer number of points from the y-domain.
ax: A matplotlib.axes instance to do plots on.
**plot_kwargs: Other arguments that will be passed onto the plotter.
Returns:
ax: A matplotlib.axes instance with the figure plotted.
J: A np.ndarray of shape (nx*ny, ) with evaluations of the function.
      xy_vectors: A np.ndarray of shape (nx*ny, 2) of the evaluated points.
    """
if ax is None: ax = plt.figure().add_subplot(111)
# Get points to evaluat the function at.
x = np.linspace(*xlims, num=nx)
y = np.linspace(*ylims, num=ny)
X, Y = np.meshgrid(x, y) # Mesh grid for combinations.
xy_vectors = np.stack([X, Y], axis=2).reshape(-1, 2) # Reshape into a batch.
# Batch apply the function:
J = np.apply_along_axis(L_fn, 1, xy_vectors).reshape(nx, ny)
cs = ax.contourf(x, y, J, **plot_kwargs) # Plot!
if show_contour_lines: ax.contour(cs, colors='gray')
if show_contour_lines: ax.clabel(cs, inline=True, fontsize=10, colors='gray')
return ax, J, xy_vectors
# TODO(zaf): See if there are tools built into tensorflow that does this.
def get_flat_params(parameters):
    """Returns flattened model parameters.
Given a list of tensorflow variables, this returns a numpy array
containing a flat representation of all the parameters.
Only works in eager mode.
Args:
parameters: The iterable containing the tf.Variable objects.
Returns:
      A numpy array containing the parameters.
    """
params = []
for param in parameters:
params.append(param.numpy().reshape(-1))
return np.concatenate(params)
def set_flat_params(model, flat_params, trainable_only=True):
    """Set model parameters with a linear numpy array.
Takes a flat tensor containing parameters and sets the model with
those parameters.
Args:
model: The tf.keras.Model object to set the params of.
flat_params: The flattened contiguous 1D numpy array containing
the parameters to set.
trainable_only: Set only the trainable parameters.
Returns:
      The keras model from `model` but with the parameters set to `flat_params`.
    """
idx = 0
if trainable_only:
variables = model.trainable_variables
else:
variables = model.variables
for param in variables:
# This will be 1 if param.shape is empty, corresponding to a single value.
flat_size = int(np.prod(list(param.shape)))
flat_param_to_assign = flat_params[idx:idx + flat_size]
# Explicit check here because of: b/112443506
if len(param.shape): # pylint: disable=g-explicit-length-test
flat_param_to_assign = flat_param_to_assign.reshape(*param.shape)
else:
flat_param_to_assign = flat_param_to_assign[0]
param.assign(flat_param_to_assign)
idx += flat_size
return model
X_LABEL = r'$L(x_0+\alpha d)-L(x_0)$'
Y_LABEL = r'$L(x_0-\alpha d)-L(x_0)$'
# plt.figure(figsize=(5, 4))
# ax = plt.gca()
ax, _, _ = plot_contours(
L_goodfellow_saddle, nx=100, ny=100, cmap='viridis_r',
ylims=(-2, 2), xlims=(-2, 2),
levels=np.arange(-0.5, 5.0, 0.1).tolist(), ax=None)
ax.set_xlabel(r'$\theta[0]$')
ax.set_ylabel(r'$\theta[1]$')
ax.set_title(r'$L(\theta) = (1-\theta[0]\theta[1])^2$')
# plt.xlim(-2.5, 2.5)
# plt.ylim(-2.5, 2.5)
# plt.plot(x0, x1, linestyle='--', color='k')
plt.text(0, 0, 'Saddle', )
optima_manifold = np.linspace(-2, 2)
# ax.scatter(list(zip(optima_manifold, 1/optima_manifold)))
plt.text(-1.5, -1, 'Local Optimas')
plt.text(0.5, 1, 'Local Optimas')
Explanation: This function has a saddle point at $(0,0)$ and a range of local optima along $x_1 = \frac{1}{x_2}$ as shown in the following figure.
End of explanation
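A quick analytic cross-check (an addition, not in the original notebook): at the origin the gradient of $L(x)=(1-x_1 x_2)^2$ vanishes and the Hessian is $[[0,-2],[-2,0]]$, whose eigenvalues have mixed signs, confirming the saddle.
H_origin = np.array([[0., -2.], [-2., 0.]])  # Hessian of L at (0, 0), computed by hand
print(np.linalg.eigvalsh(H_origin))          # mixed signs (-2, 2) -> saddle point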
x0 = np.array([-2, -0.5])
x1 = np.array([-0.5, -2.])
alphas = np.linspace(-0.5, 1.5, num=50)
L_vals = []
for alpha in alphas:
new_x = (1-alpha)*x0 + alpha*x1
L_vals.append(-L_goodfellow_saddle(new_x))
plt.figure(figsize=(5, 4))
plt.plot(alphas, L_vals)
plt.xlabel(r'Interpolation Coefficient, $\alpha$')
plt.ylabel(r'$\mathcal{L}((1-\alpha)\theta_0+\alpha\theta_1$)')
plt.xticks([0.0, 1.0])
plt.tight_layout()
# plt.savefig('demo_interpolation.pdf')
Explanation: A first step: linear interpolations
We first look at an example where we contrast linear interpolation with the proposed technique.
End of explanation
plt.figure(figsize=(5, 4))
ax = plt.gca()
ax, _, _ = plot_contours(
L_goodfellow_saddle, nx=100, ny=100, cmap='viridis_r',
ylims=(-2.25, 0.05), xlims=(-2.25, 0.05),
show_contour_lines=True,
levels=[0.0, 0.1, 1.0, 2.0, 5.0], ax=ax)
ax.set_xlabel(r'$\theta[0]$')
ax.set_ylabel(r'$\theta[1]$')
plt.xlim(-2.25, 0.05)
plt.ylim(-2.25, 0.05)
plt.plot(x0, x1, linestyle='--', color='k')
plt.text(x0[0], x0[1], r'$\theta_1$')
plt.text(x1[0], x1[1], r'$\theta_2$')
plt.tight_layout()
# plt.savefig('demo_curve.pdf')
Explanation: A linear interpolation would be correct in pointing us to the fact that there is a local optimum, but would mislead us into thinking that there was no path from the left optimum to the right optimum. In fact, if we plot the loss function, we see that this slice actually goes through a region of high loss before making it into another region with a local optimum.
End of explanation
def sample_directions(x_dim, num_samples=100):
    """Sample normalized random directions.

    Args:
      x_dim: The integer dimensionality of each direction vector.
      num_samples: The number of samples to obtain.

    Returns:
      A np.ndarray of shape (num_samples, x_dim) such that the L2 norms are 1
      along the x_dim.
    """
random_directions = np.random.normal(size=(num_samples, x_dim))
random_directions /= np.linalg.norm(random_directions, axis=1).reshape(-1, 1)
return random_directions
def get_purturbed_directions(x0, step_size=1.0, num_samples=100):
    """Get perturbed parameters.

    Args:
      x0: A np.ndarray representing the central parameter to perturb.
      step_size: A float representing the size of the step to move in.
      num_samples: The integer number of samples to draw.

    Returns:
      Two np.ndarrays representing x0 perturbed by adding a random direction and
      by subtracting it. They are paired so that they move by the same direction
      at each index.
    """
directions = sample_directions(x0.shape[0], num_samples)
forward_step_points = x0.reshape(1, -1) + step_size * directions
backward_step_points = x0.reshape(1, -1) - step_size * directions
return forward_step_points, backward_step_points
def get_sampled_loss_function(
L_fn, x0, step_size=1.0, num_samples=100, x0_samples=1, return_points=False):
    """Sample the loss function around the perturbations.
Args:
L_fn: A callable function that takes a np.ndarray representing parameters
and returns the loss.
x0: A np.ndarray representing the central parameter to perturb.
step_size: A float representing the size of the step to move in.
num_samples: The integer number of samples to draw.
x0_samples: The integer number of times to sample x0 (default is 1). Set > 1
        if the loss function is stochastic.
    """
forward_step_points, backward_step_points = get_purturbed_directions(
x0, step_size, num_samples)
if x0_samples == 1:
L_eval = L_fn(x0)
else:
L_eval = np.mean([L_fn(x0) for _ in range(x0_samples)])
L_forward_eval = np.apply_along_axis(L_fn, 1, forward_step_points) - L_eval
L_backward_eval = np.apply_along_axis(L_fn, 1, backward_step_points) - L_eval
if return_points:
return (
L_forward_eval,
L_backward_eval,
forward_step_points,
backward_step_points)
else:
return L_forward_eval, L_backward_eval
Explanation: Proposed Approach
End of explanation
#######
## Define some simple loss functions for exposition.
#######
def L_quad(x):
    """Purely quadratic function."""
    return - x[0]**2 - 2.*x[1]**2

def L_flat_quad(x):
    """A quadratic function with one direction weighted with 0."""
    return -x[0]**2 - 0.*x[1]**2

def L_saddle(x):
    """A function with a saddle point."""
    return -2*x[0]**2 + 2*x[1]**2

def L_linear(x):
    """A linear function."""
    return -2*x[0] + 2*x[1]
plt.figure(figsize=(8, 5))
plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(
L_quad, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Strict local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(122)
forward_samples, backward_samples = get_sampled_loss_function(
L_flat_quad, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Flat local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.tight_layout()
# plt.savefig('LM_scatter_comparisons.pdf')
plt.figure(figsize=(10, 6))
plt.subplot(131)
forward_samples, backward_samples = get_sampled_loss_function(
L_saddle, np.array([0.0, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Saddle point')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(132)
forward_samples, backward_samples = get_sampled_loss_function(
L_quad, np.array([0.1, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Close to local optimum')
plt.gca().set_xlim(-0.04, 0.04)
plt.gca().set_ylim(-0.04, 0.04)
plt.subplot(133)
forward_samples, backward_samples = get_sampled_loss_function(
L_linear, np.array([0.1, 0.0]), step_size=0.1)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.gca().set_aspect('equal')
plt.title(r'Linear region')
# plt.gca().set_xlim(-0.04, 0.04)
# plt.gca().set_ylim(-0.04, 0.04)
plt.tight_layout()
plt.savefig('linear_scatter_comparisons.pdf')
Explanation: We now show the scatter plots from our proposed technique.
End of explanation
plt.figure(figsize=(5, 4))
# plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.1)
plt.scatter(-forward_samples, -backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.plot(np.linspace(-0.05, 0.01), np.linspace(-0.05, 0.01), linestyle='--', color='k')
plt.plot(np.linspace(-0.005, 0.005), -np.linspace(-0.005, 0.005), linestyle=':', color='k')
plt.xlabel(r'$\mathcal{L}(\theta_0+\alpha d)-\mathcal{L}(\theta_0)$')
plt.ylabel(r'$\mathcal{L}(\theta_0-\alpha d)-\mathcal{L}(\theta_0)$')
plt.gca().set_aspect('equal')
plt.tight_layout()
# plt.title(r'$x_0=(-0.5, -2)$: Minima')
# plt.savefig('demo_scatter2.pdf')
Explanation: We now visualize the point given from the first section.
End of explanation
# files.download('linear_scatter_comparisons.pdf')
# files.download('LM_scatter_comparisons.pdf')
plt.figure(figsize=(10, 4))
plt.subplot(121)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.25)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'$x_0=(-0.5, -2)$: Minima')
plt.subplot(122)
forward_samples, backward_samples = get_sampled_loss_function(L_goodfellow_saddle, np.array([0., 0.]), step_size=0.25)
plt.scatter(forward_samples, backward_samples, s=15, marker='x')
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'$x_0=(0.0, 0.0)$: Saddle Point')
plt.tight_layout()
Explanation: Our technique correctly recovers that this point is a flat local optimum.
End of explanation
# These axes recover the gradient and curvature spectrum when projecting the 2D
# scatter plot evaluations.
CURVATURE_AX = np.array([1, 1]) # x = y
GRADIENT_AX = np.array([1, -1]) # x = -y
def scalar_project(x, v):
    """Calculate the scalar projection of vector x onto vector v."""
v_hat = v / np.linalg.norm(v)
return np.dot(v_hat, x)
def get_gradient_projection(values_centered):
    """Project 2D points onto the x=-y axis which gives gradient information."""
return np.apply_along_axis(
lambda x: scalar_project(x, GRADIENT_AX), 1, values_centered)
def get_curvature_projection(values_centered):
    """Project 2D points onto the x=y axis which gives curvature information."""
return np.apply_along_axis(
lambda x: scalar_project(x, CURVATURE_AX), 1, values_centered)
plt.figure(figsize=(5, 4))
plt.subplot(211)
forward_samples, backward_samples = get_sampled_loss_function(
    L_quad, np.array([-0.5, -2.]), step_size=0.1, num_samples=1000)  # assumption: the original cell referenced an undefined `L_basic`; L_quad is the simple quadratic defined above
projections = get_gradient_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Gradient value')
plt.ylabel('Count')
plt.subplot(212)
projections = get_curvature_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Curvature value')
plt.ylabel('Count')
plt.tight_layout()
plt.savefig('demo_spectra_joined.pdf')
plt.figure(figsize=(5, 4))
plt.subplot(211)
forward_samples, backward_samples = get_sampled_loss_function(
L_goodfellow_saddle, np.array([-0.5, -2.]), step_size=0.1,num_samples=1000)
projections = get_gradient_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Gradient value')
plt.ylabel('Count')
plt.subplot(212)
projections = get_curvature_projection(np.array([forward_samples, backward_samples]).T)
plt.hist(projections, bins=50)
plt.xlabel('Curvature value')
plt.ylabel('Count')
plt.tight_layout()
plt.savefig('demo_spectra_joined.pdf')
Explanation: These help us distinguish between optima which have flat regions around them as well as saddle points. Note that the flatness is identified by the points clustered around 0. We contrast this with a quadratic function with a single strict optimum, which has no points near zero:
Obtaining gradient and curvature information from visualizations
We claim that this method can provide information regarding the gradient and curvature of the local loss function. Let us assume that the loss is locally quadratic around $x_0$: $L(x_0+\delta) = L(x_0) + g^T\delta + \frac{1}{2}\delta^T H \delta$, where $g$ is the gradient and $H$ the Hessian at $x_0$. We then have
$$\frac{1}{2}\big[(L(x_0 + \alpha d) - L(x_0)) - (L(x_0 -\alpha d) - L(x_0))\big]=\alpha\, g^Td$$
and
$$\frac{1}{2}\big[(L(x_0 + \alpha d) - L(x_0)) + (L(x_0 -\alpha d) - L(x_0))\big]=\frac{\alpha^2}{2}\, d^THd,$$
which correspond (up to a constant scale) to projecting the centered pair $(L(x_0+\alpha d)-L(x_0),\ L(x_0-\alpha d)-L(x_0))$ onto the $x=-y$ and $x=y$ axes respectively. Therefore, projections of our scatter plots capture information about the components of the gradient and Hessian in the random direction $d$. By repeatedly sampling many directions we eventually recover how the gradient and curvature vary in many directions around $x_0$. We can use a histogram to describe the density of these curvatures. In particular, the maximum and minimum curvature values obtained from this technique are close to the maximum and minimum eigenvalues of $H$. This curvature spectrum is related to eigenvalue spectra, which have been used before to analyze neural networks.
End of explanation
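A small numerical check (an addition) of the two identities above on a random quadratic, using the same centering as the scatter plots:
rng = np.random.RandomState(0)
n, alpha_ = 5, 0.1
g = rng.randn(n)
A = rng.randn(n, n)
H = A + A.T                                   # symmetric Hessian
L_toy = lambda x: g @ x + 0.5 * x @ H @ x     # quadratic model around x0 = 0
d = rng.randn(n)
d /= np.linalg.norm(d)
f = L_toy(alpha_ * d) - L_toy(np.zeros(n))    # centered forward change
b = L_toy(-alpha_ * d) - L_toy(np.zeros(n))   # centered backward change
print(np.isclose(0.5 * (f - b), alpha_ * g @ d))               # gradient term
print(np.isclose(0.5 * (f + b), 0.5 * alpha_**2 * d @ H @ d))  # curvature term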
#@title Load Fashion MNIST.
(X_train, Y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
dataset_size = len(X_train)
output_size = 10
# Convert the array to float32 and normalize by 255
# Add a dim to represent the RGB index for CNNs
X_train = np.expand_dims((X_train.astype(np.float32) / 255.0), -1)
image_size = X_train.shape[1:]
Y_train = tf.keras.utils.to_categorical(Y_train, output_size).astype(np.float32)
#@title Create a simple Network.
with tf.device('gpu:0'):
model = tf.keras.Sequential(
[
tf.keras.layers.Flatten(input_shape=image_size),
tf.keras.layers.Dense(output_size, activation=tf.nn.softmax)
]
)
learning_rate = tf.Variable(0.1, trainable=False)
optimizer = tf.train.MomentumOptimizer(
learning_rate=learning_rate,
momentum=0.9)
get_decayed_learning_rate = tf.train.polynomial_decay(
learning_rate,
tf.train.get_or_create_global_step(),
5000000,
learning_rate.numpy() * 0.001)
# model.call = tfe.defun(model.call)
# We will now compile and print out a summary of our model
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.summary()
model_copy = tf.keras.models.clone_model(model)
model_copy.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
def L_fn_mnist(weights):
# Closure that allows us to get information about the loss function.
set_flat_params(model_copy, weights)
loss, _ = model_copy.evaluate(
full_dataset.make_one_shot_iterator(),
steps=1,
verbose=0)
return loss
model.get_weights()
Explanation: Visualizing Objective Functions for Fashion MNIST
We demonstrate the utility of our technique for visualizing the loss function for neural networks. We first visualize the loss function for a single layer neural network, better known as logistic regression. Since there are no non-linearities in this scenario, the categorical cross entropy loss is convex. We sanity check our technique by visualizing the loss around the initializer and a minimizer found by optimizing using large batch stochastic gradient descent. We evaluate the loss using the whole dataset.
Negative Log Likelihood and Logistic Regression
End of explanation
# Save the initializer.
initializer_weights = get_flat_params(model.variables)
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist,
initializer_weights,
step_size=1.0,
num_samples=200,
x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at initializer')
plt.tight_layout()
# Create some callbacks to allow us to print the learning rate and other things.
tf.train.get_or_create_global_step().assign(0)
BATCH_SIZE=60000
FULL_DATASET_SIZE = X_train.shape[0]
# Create the dataset
original_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train))
dataset = original_dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(1)
dataset = dataset.repeat()
full_dataset = original_dataset.batch(FULL_DATASET_SIZE)
def lr_decay_callback(*args):
learning_rate.assign(get_decayed_learning_rate())
tf.train.get_or_create_global_step().assign_add(1)
def lr_print_callback(epoch, logs):
step = tf.train.get_or_create_global_step().numpy()
if step % 50 == 0 or step == 0:
print(
'Step {}, Learning rate: {}, Metrics: {}'.format(
step, learning_rate.numpy(),logs))
learning_rate_decay_callback = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lr_decay_callback)
learning_rate_print_callback = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lr_print_callback)
callbacks = [learning_rate_decay_callback, learning_rate_print_callback]
# Train!
EPOCHS=15000
history = model.fit(
dataset, epochs=EPOCHS, steps_per_epoch=FULL_DATASET_SIZE // BATCH_SIZE,
callbacks=callbacks, verbose=0)
Explanation: Around the initializer, the loss function is linear.
End of explanation
np.save('weights.npy', model.get_weights())
from google.colab import files
files.download('weights.npy') # NTS: save to google drive for week Oct 22
final_weights = get_flat_params(model.variables)
np.save('final_weights_flat.npy', final_weights)
files.download('final_weights_flat.npy') # NTS: save to google drive for week Oct 22
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist, final_weights, step_size=1.0, num_samples=200, x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at final point')
plt.tight_layout()
start = time.time()
forward_samples, backward_samples = get_sampled_loss_function(
L_fn_mnist, final_weights, step_size=0.25, num_samples=200, x0_samples=10)
# Free Doubling of points..
plt.scatter(
np.concatenate([forward_samples, backward_samples]),
np.concatenate([backward_samples, forward_samples]), s=2, marker='x')
print('total time: {}'.format(time.time() - start))
plt.axhline(0.0, color='gray', linestyle='--')
plt.axvline(0.0, color='gray', linestyle='--')
plt.xlabel(X_LABEL)
plt.ylabel(Y_LABEL)
plt.title(r'Fashion MNIST at final point, smaller alpha')
plt.tight_layout()
Explanation: footnote: We note that this final accuracy comes close to the results for a multi-layer neural network (and is better than logistic regression) published on the dataset authors' benchmarking website.
End of explanation
def index_of_percentile(data, percentile, data_index=0):
    """Gets the index of the percentile in the data.

    Args:
      data: A np.ndarray of shape (BATCH, ...)
      percentile: The percentile of the data you want.
      data_index: An integer representing the index of `data` that you want to
        slice.

    Returns:
      The index closest to the percentile of the data. When accessing data[index]
      we retrieve the data at the `percentile`-th percentile.
    """
percentile_value = np.percentile(
data[:, data_index],
percentile,
interpolation='nearest'
)
data_shifted = np.abs(data[:, data_index] - percentile_value)
return np.argmin(data_shifted)
def get_curvature_of_most_improvement_direction(
data_centered, percentile=90, data_index=0, curvature_data=None):
    """Get the curvature value for the direction which gives the most improvement.

    Args:
      data_centered: A np.ndarray containing the centered version of the data.
      percentile: An integer value of the percentile of the data.
      data_index: See `index_of_percentile`.
      curvature_data: Precomputed curvature data.

    Returns:
      The curvature value that corresponds to the relative change at
      percentile `percentile`.
    """
# Get the index of the data point at the percentile'd data.
closest_idx = index_of_percentile(data_centered, percentile, data_index)
if curvature_data is None:
# No precomputed curvature data.
# So project the centered data and get the curvature.
return scalar_project(
data_centered[closest_idx, :], CURVATURE_AX)
else:
# Curvature data was precomputed so just returned the curvature value
# corresponding to the index that is closest to the `percentile`th
# percentile.
return curvature_data[closest_idx]
Explanation: Tracking negative curvature
End of explanation |
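A possible usage sketch (added; the notebook ends before these helpers are exercised): stack the centered forward/backward evaluations into (N, 2) pairs and query the curvature along a direction of large improvement.
forward_samples, backward_samples = get_sampled_loss_function(
    L_goodfellow_saddle, np.array([0.0, 0.0]), step_size=0.1, num_samples=500)
pairs = np.stack([forward_samples, backward_samples], axis=1)
print(get_curvature_of_most_improvement_direction(pairs, percentile=90))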
15,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Navigation exercise
<span title="Roomba navigating around furniture"><img src="img/roomba.jpg" align="right" width=200></span>
A mobile robot like the Roomba in the picture has to avoid crashing into the obstacles around it, and if it does collide, it has to react so that it neither causes nor suffers any damage.
With the touch sensor we cannot avoid the collision, but we can detect it once it happens and react.
The goal of this exercise is to program the following behavior into the robot
Step1: Version 1.0
Use the code from the previous while-loop example
Step2: Version 2.0
The robot's maneuver is supposed to let it avoid the obstacle and therefore move forward again. How can we program that?
We need to repeat the whole block of instructions of the behavior, including the loop. No problem: programming languages let you put one loop inside another, which is called nested loops.
Use a for loop (like the one we saw in the square exercise) to repeat the previous code 5 times.
Step3: Version 3.0
<img src="img/interrupt.png" align="right">
What if, instead of repeating 5, 10 or 20 times, we want the robot to keep going until we stop it ourselves? We can do that with an infinite loop, and we will tell the program to stop with the interrupt kernel button.
In Python, an infinite loop is written like this
Step4: Version 4.0
The robot's behavior, always turning to the same side, is a bit predictable, don't you think?
Let's introduce an element of chance
Step5: The random function is like rolling a die, but instead of giving a value from 1 to 6, it gives a real number between 0 and 1.
The robot can then use that value to decide whether to turn left or right. How? If the value is greater than 0.5, it turns to one side; otherwise, to the other. It will therefore turn at random, with a 50% probability for each side.
Add the random turning decision to the code of the previous version
Step6: Recap
Everything we have seen in this exercise | Python Code:
from functions import connect, touch, forward, backward, left, right, stop, disconnect, next_notebook
from time import sleep
connect()
Explanation: Navigation exercise
<span title="Roomba navigating around furniture"><img src="img/roomba.jpg" align="right" width=200></span>
A mobile robot like the Roomba in the picture has to avoid crashing into the obstacles around it, and if it does collide, it has to react so that it neither causes nor suffers any damage.
With the touch sensor we cannot avoid the collision, but we can detect it once it happens and react.
The goal of this exercise is to program the following behavior into the robot:
while it does not detect anything, the robot moves forward
after a collision, the robot will move backward and turn
Connect the robot:
End of explanation
while ___:
___
___
Explanation: Version 1.0
Use the code from the previous while-loop example: you only need to add that, after the collision, the robot moves backward, turns a little (toward your preferred side), and stops.
End of explanation
for ___:
while ___:
___
___
Explanation: Version 2.0
The robot's maneuver is supposed to let it avoid the obstacle and therefore move forward again. How can we program that?
We need to repeat the whole block of instructions of the behavior, including the loop. No problem: programming languages let you put one loop inside another, which is called nested loops.
Use a for loop (like the one we saw in the square exercise) to repeat the previous code 5 times.
End of explanation
try:
while True:
while ___:
___
___
except KeyboardInterrupt:
stop()
Explanation: Version 3.0
<img src="img/interrupt.png" align="right">
What if, instead of repeating 5, 10 or 20 times, we want the robot to keep going until we stop it ourselves? We can do that with an infinite loop, and we will tell the program to stop with the interrupt kernel button.
In Python, an infinite loop is written like this:
python
while True:
    statement
When the program is interrupted, the instruction being executed at that moment is abandoned, and the robot must be stopped. In Python, this process is called an exception and it is handled like this:
python
try:
    while True:
        statement # the behavior goes here
except KeyboardInterrupt:
    statement # here we stop the robot
Use an infinite loop to repeat the robot's behavior until you stop it.
End of explanation
from random import random
random()
Explanation: Version 4.0
The robot's behavior, always turning to the same side, is a bit predictable, don't you think?
Let's introduce an element of chance: programming languages come with random number generators, which are like the dice of computers.
Run the following code several times with Ctrl+Enter and check the results.
End of explanation
try:
while True:
while ___:
___
if ___:
___
else:
___
except KeyboardInterrupt:
stop()
Explanation: The random function is like rolling a die, but instead of giving a value from 1 to 6, it gives a real number between 0 and 1.
The robot can then use that value to decide whether to turn left or right. How? If the value is greater than 0.5, it turns to one side; otherwise, to the other. It will therefore turn at random, with a 50% probability for each side.
Add the random turning decision to the code of the previous version (one possible sketch is shown below):
End of explanation
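One possible sketch of the random turn (an addition, not the official solution); it assumes left() and right() can be called with no arguments, like stop() above:
if random() > 0.5:
    left()    # turn to one side...
else:
    right()   # ...or to the other, each with 50% probability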
disconnect()
next_notebook('sound')
Explanation: Recap
Everything we have seen in this exercise:
nested loops
exceptions
random numbers
Not bad: we have covered almost the whole syllabus of a first programming course, and that with just one sensor!
So let's move on to the next sensor.
Before continuing, disconnect the robot:
End of explanation |
15,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast Sign Adversary Generation Example
This notebook demonstrates how to find adversarial examples using MXNet Gluon, taking advantage of the gradient information
[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv
Step1: Build simple CNN network for solving the MNIST dataset digit recognition task
Step2: Data Loading
Step3: Create the network
Step4: Initialize training
Step5: Training loop
Step6: Perturbation
We first run a validation batch and measure the resulting accuracy.
We then perturb this batch by modifying the input in the direction of the sign of the gradient of the loss with respect to the input, i.e. the direction that increases the loss.
Step7: Now we perturb the input
Step8: Visualization
Let's visualize an example after perturbation.
We can see that the prediction is often incorrect. | Python Code:
%matplotlib inline
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from mxnet import gluon
Explanation: Fast Sign Adversary Generation Example
This notebook demonstrates how to find adversarial examples using MXNet Gluon, taking advantage of the gradient information
[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
https://arxiv.org/abs/1412.6572
End of explanation
ctx = mx.gpu() if len(mx.test_utils.list_gpus()) else mx.cpu()
batch_size = 128
Explanation: Build simple CNN network for solving the MNIST dataset digit recognition task
End of explanation
transform = lambda x,y: (x.transpose((2,0,1)).astype('float32')/255., y)
train_dataset = gluon.data.vision.MNIST(train=True).transform(transform)
test_dataset = gluon.data.vision.MNIST(train=False).transform(transform)
train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
Explanation: Data Loading
End of explanation
net = gluon.nn.HybridSequential()
with net.name_scope():
net.add(
gluon.nn.Conv2D(kernel_size=5, channels=20, activation='tanh'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Conv2D(kernel_size=5, channels=50, activation='tanh'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Flatten(),
gluon.nn.Dense(500, activation='tanh'),
gluon.nn.Dense(10)
)
Explanation: Create the network
End of explanation
net.initialize(mx.initializer.Uniform(), ctx=ctx)
net.hybridize()
loss = gluon.loss.SoftmaxCELoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1, 'momentum':0.95})
Explanation: Initialize training
End of explanation
epoch = 3
for e in range(epoch):
train_loss = 0.
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
with mx.autograd.record():
output = net(data)
l = loss(output, label)
l.backward()
trainer.update(data.shape[0])
train_loss += l.mean().asscalar()
acc.update(label, output)
print("Train Accuracy: %.2f\t Train Loss: %.5f" % (acc.get()[1], train_loss/(i+1)))
Explanation: Training loop
End of explanation
# Get a batch from the testing set
for data, label in test_data:
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
break
# Attach gradient to it to get the gradient of the loss with respect to the input
data.attach_grad()
with mx.autograd.record():
output = net(data)
l = loss(output, label)
l.backward()
acc = mx.metric.Accuracy()
acc.update(label, output)
print("Validation batch accuracy {}".format(acc.get()[1]))
Explanation: Perturbation
We first run a validation batch and measure the resulting accuracy.
We then perturb this batch by modifying the input in the direction of the sign of the gradient of the loss with respect to the input, i.e. the direction that increases the loss.
End of explanation
data_perturbated = data + 0.15 * mx.nd.sign(data.grad)
output = net(data_perturbated)
acc = mx.metric.Accuracy()
acc.update(label, output)
print("Validation batch accuracy after perturbation {}".format(acc.get()[1]))
Explanation: Now we perturb the input
End of explanation
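An added sketch (not in the original notebook): sweep the FGSM step size epsilon and watch the accuracy on this batch degrade as the perturbation grows.
for eps in [0.0, 0.05, 0.1, 0.15, 0.2]:
    adv = data + eps * mx.nd.sign(data.grad)   # same FGSM step as above, varying epsilon
    out = net(adv)
    metric = mx.metric.Accuracy()
    metric.update(label, out)
    print("epsilon={:.2f} accuracy={:.3f}".format(eps, metric.get()[1]))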
from random import randint
idx = randint(0, batch_size-1)
plt.imshow(data_perturbated[idx, :].asnumpy().reshape(28,28), cmap=cm.Greys_r)
print("true label: %d" % label.asnumpy()[idx])
print("predicted: %d" % np.argmax(output.asnumpy(), axis=1)[idx])
Explanation: Visualization
Let's visualize an example after perturbation.
We can see that the prediction is often incorrect.
End of explanation |
15,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Final Exam Practice Problems
Step1: Problem 1
Create a 2D array with the shape shown below.
However, you may not type in the entries one by one.
$$\left [
\begin{matrix}
1 & 6 & 11 \\
2 & 7 & 12 \\
3 & 8 & 13 \\
4 & 9 & 14 \\
5 & 10 & 15
\end{matrix}
\right ]
$$
2. Create a new 2D array consisting only of rows 2 and 4 of the matrix above.
Step2: Problem 2
The array a is given as follows.
a = np.arange(25).reshape((5,5))
Divide each column of the array a by the array b below.
b = np.array([1., 5, 10, 15, 20])
Hint
Step3: Problem 3
First, draw random numbers from the interval [0, 1] to create a 2D array shaped like a 10 x 3 matrix.
In each row of the generated array, find the number closest to 0.5.
Hint
Step4: Problem 4
If you visit the site below, you can find a description of the photo of a woman known as "Lena".
http
Step5: You can view the image using the plt.imshow function.
Step6: The picture above uses only the 2D array data, so it does not look right.
If you want to display it as a grayscale picture, you can do the following.
Step7: Let's use cropping to zoom in on a specific region.
For example, let's remove 30 pixels from every edge.
Removing pixels is done with slicing.
Step8: Let's enclose the region of Lena's face in a circle.
The area outside the circle is set to black.
The idea is to make it look like a locket worn on a necklace.
For example, if you apply a circle to the central region, it can look like the picture below.
<img src="images/lena-locket.png" width="300">
Write the code that produces the picture above.
Hint
Step9: Problem 5
The data below record the changes in the populations of hares, lynxes, and carrots in northern Canada from 1900 to 1920.
Step10: data is a 2D array with shape (21, 4).
Now, using the transpose, we can obtain arrays holding the 21 years of hare, lynx, and carrot populations as shown below.
Step11: To examine how the populations change over the years, we draw a plot.
Step12: Compute the mean and standard deviation for each species.
For each species, find the year with the smallest population and the year with the second smallest population.
Explain the correlation between the change in the hare population and the change in the lynx population. Use the np.gradient and np.corrcoef functions. | Python Code:
import numpy as np
Explanation: Final Exam Practice Problems
End of explanation
a = np.arange(1, 16).reshape(3, 5).T
a
np.arange(1, 6)[:, np.newaxis] + np.arange(0, 11, 5)
Explanation: Problem 1
Create a 2D array with the shape shown below.
However, you may not type in the entries one by one.
$$\left [
\begin{matrix}
1 & 6 & 11 \\
2 & 7 & 12 \\
3 & 8 & 13 \\
4 & 9 & 14 \\
5 & 10 & 15
\end{matrix}
\right ]
$$
2. Create a new 2D array consisting only of rows 2 and 4 of the matrix above.
End of explanation
a = np.arange(25).reshape((5,5))
a
b = np.array([1., 5, 10, 15, 20])
b
a/b[:, np.newaxis]
Explanation: Problem 2
The array a is given as follows.
a = np.arange(25).reshape((5,5))
Divide each column of the array a by the array b below.
b = np.array([1., 5, 10, 15, 20])
Hint: use np.newaxis.
End of explanation
x = np.random.rand(10,3)
a= np.abs(x - 0.5)
b = a.argsort()
b
e = b[:, 0]
f = np.tile(e[:, np.newaxis], 3)
f
x[np.arange(10), e]
g = np.tile(np.arange(3), (10,1))
g
h = g == f
h
x[h]
x
Explanation: Problem 3
First, draw random numbers from the interval [0, 1] to create a 2D array shaped like a 10 x 3 matrix.
In each row of the generated array, find the number closest to 0.5.
Hint:
Use np.abs and np.argsort to find, for each row, the position of the number
closest to 0.5.
You can then use fancy indexing to pick out the number closest to 0.5 in each row.
End of explanation
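An added alternative sketch: np.argmin gives the same per-row answer with less code.
closest_idx = np.abs(x - 0.5).argmin(axis=1)   # column index of the value closest to 0.5, per row
x[np.arange(x.shape[0]), closest_idx]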
from scipy import misc
import matplotlib.pylab as plt
%pylab inline
lena = misc.lena()
lena
Explanation: Problem 4
If you visit the site below, you can find a description of the photo of a woman known as "Lena".
http://www.cs.cmu.edu/~chuck/lennapg/
The picture found on that site is the image most widely used in connection with image compression algorithms.
In Python, the scipy module provides the Lena image data as a 2D array.
End of explanation
plt.imshow(lena)
Explanation: You can view the image using the plt.imshow function.
End of explanation
plt.imshow(lena, cmap='gray')
Explanation: The picture above uses only the 2D array data, so it does not look right.
If you want to display it as a grayscale picture, you can do the following.
End of explanation
crop_lena = lena[100:-100, 100:-100]
plt.imshow(crop_lena, cmap=plt.cm.gray)
lena.shape
Explanation: Let's use cropping to zoom in on a specific region.
For example, let's remove 30 pixels from every edge.
Removing pixels is done with slicing.
End of explanation
x = np.arange(512)
y = np.arange(512)[:, np.newaxis]
mask = (x - 256)**2 + (y - 256)** 2 > 230**2
mask[230, 430]
lena[mask]=0
plt.imshow(lena, cmap='gray')
Explanation: Let's enclose the region of Lena's face in a circle.
The area outside the circle is set to black.
The idea is to make it look like a locket worn on a necklace.
For example, if you apply a circle to the central region, it can look like the picture below.
<img src="images/lena-locket.png" width="300">
Write the code that produces the picture above.
Hint:
First, check that the lena array is a 2D array of shape (512, 512).
Use a mask and fancy indexing.
You need to build a mask over the 512 x 512 2D array that marks the region
outside a circle of a given radius.
For reference, the equation of a circle of radius 230 is as follows.
(x - center_x)**2 + (y - center_y)**2 = 230**2
To handle the 512 x 512 grid, use the np.ogrid function.
Set to 0 the entries whose distance from the center of the grid is at least 230.
End of explanation
data = np.loadtxt('data/populations.txt')
data
Explanation: Problem 5
The data below record the changes in the populations of hares, lynxes, and carrots in northern Canada from 1900 to 1920.
End of explanation
year, hares, lynxes, carrots = data.T
Explanation: data is a 2D array with shape (21, 4).
Now, using the transpose, we can obtain arrays holding the 21 years of hare, lynx, and carrot populations as shown below.
End of explanation
plt.axes([0.2, 0.1, 0.5, 0.8])
plt.plot(year, hares, year, lynxes, year, carrots)
plt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5))
hare_grad = np.gradient(hares)
hare_grad
Explanation: To examine how the populations change over the years, we draw a plot.
End of explanation
plt.plot(year, hare_grad, year, -lynxes)
Explanation: Compute the mean and standard deviation for each species.
For each species, find the year with the smallest population and the year with the second smallest population.
Explain the correlation between the change in the hare population and the change in the lynx population, using the np.gradient and np.corrcoef functions (a small sketch follows below).
End of explanation |
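A small added sketch for question 3 (one possible answer): correlate the change in the hare population with the lynx population using np.corrcoef.
lynx_grad = np.gradient(lynxes)
print(np.corrcoef(hare_grad, lynxes))     # hare growth vs. lynx counts
print(np.corrcoef(hare_grad, lynx_grad))  # hare growth vs. lynx growth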
15,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Spectral Features in Essentia
For classification, we're going to be using new features in our arsenal
Step1: This value is normalized between 0 and 1. If 0, then the centroid is at zero. If 1, then the centroid is all the way to the "right", i.e., equal to fs/2, the Nyquist frequency, or the highest frequency a digital signal can possibly have.
Here is a sanity check
Step2: essentia.standard.CentralMoments
The first step to computing the other three spectral moments (spread, skewness, and kurtosis) is to compute the central moments of a spectrum
Step3: essentia.standard.DistributionShape
To compute the spectral spread, skewness, and kurtosis, we use essentia.standard.DistributionShape | Python Code:
# Imports assumed from earlier in the source notebook (not shown in this excerpt)
import essentia
import essentia.standard as ess
import numpy
import scipy

spectrum = ess.Spectrum()
centroid = ess.Centroid()
x = essentia.array(scipy.randn(1024))
X = spectrum(x)
spectral_centroid = centroid(X)
print spectral_centroid
Explanation: ← Back to Index
Spectral Features in Essentia
For classification, we're going to be using new features in our arsenal: spectral moments (centroid, bandwidth, skewness, kurtosis) and other spectral statistics.
[Moments](https://en.wikipedia.org/wiki/Moment_(mathematics)) is a term used in physics and statistics. There are raw moments and central moments.
You are probably already familiar with two examples of moments: mean and variance. The first raw moment is known as the mean. The second central moment is known as the variance.
essentia.standard.Centroid
To compute the spectral centroid in Essentia, we will use essentia.standard.Centroid:
End of explanation
sum((X/sum(X))*numpy.linspace(0, 1, len(X)))
Explanation: This value is normalized between 0 and 1. If 0, then the centroid is at zero. If 1, then the centroid is all the way to the "right", i.e., equal to fs/2, the Nyquist frequency, or the highest frequency a digital signal can possibly have.
Here is a sanity check:
End of explanation
central_moments = ess.CentralMoments()
print central_moments(X)
Explanation: essentia.standard.CentralMoments
The first step to computing the other three spectral moments (spread, skewness, and kurtosis) is to compute the central moments of a spectrum:
End of explanation
distributionshape = ess.DistributionShape()
spectral_moments = distributionshape(central_moments(X))
print spectral_moments
Explanation: essentia.standard.DistributionShape
To compute the spectral spread, skewness, and kurtosis, we use essentia.standard.DistributionShape:
End of explanation |
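A small added note (assuming DistributionShape's usual output order of spread, then skewness, then kurtosis): the result can be unpacked directly.
spectral_spread, spectral_skewness, spectral_kurtosis = spectral_moments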
15,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Data
Overview of today's topics
Step1: 1. Loading a shapefile or GeoPackage
Step2: 2. Loading a CSV file
Often, you won't have a shapefile or GeoPackage (which is explicitly spatial), but rather a CSV file which is implicitly spatial (contains lat-lng columns). If you're loading a CSV file (or other non-explicitly spatial file type) of lat-lng data
Step3: Always define the CRS if you are manually creating a GeoDataFrame! Earlier, when we loaded the shapefile, geopandas loaded the CRS from the shapefile itself. But our CSV file is not explicitly spatial and it contains no CRS data, so we have to tell it what it is. In our case, the CRS is EPSG
Step4: 3. Loading a raster
So far we've worked with vector data. We can also work with raster data. Raster datasets are grids of pixels, where each pixel has a value (or multiple values if multiple bands of data), while vector datasets contain geometry objects with attributes, where each geometry is represented by mathematical coordinates for points, lines, polygons, etc. "Raster is faster but vector is corrector."
Common raster data include
Step5: 4. Projection
Your datasets need to be in the same CRS if you want to work with them together. If they're not, then project one or more of them so they're in the same CRS.
Take note of the important difference here between setting a CRS (i.e., identifying a dataset's current CRS) and projecting to a CRS (i.e., mathematically transforming your coordinates from their current CRS to a different one). Projection lets you transform, for example, from lat-lng coordinates on the surface of the round Earth to a flat two-dimensional plane for mapping and analysis in intuitive units like meters.
Step6: Be careful
Step7: 5. Geometric operations
GIS and spatial analysis use common "computational geometry" operations like intersects, within, and dissolve.
Step8: Many spatial operations, such as intersects/within, scale in time complexity as a function of 1) the number of objects, and 2) the number of vertices in the reference polygon. Using a simplified reference polygon, such as a bounding box, can drastically speed up your operation at the cost of imprecision. In this case, our raster is already approximately square, and we don't need to do precise matching, so let's use the bounding box for intersects/within to filter our tracts and businesses to those that lie within the area covered by our elevation data.
Step9: 6. Spatial join
Joins two geodataframes based on some shared spatial location.
Step10: 6a. How hilly is it around the stations?
Step11: 6b. Which stations have the most businesses in their catchment areas?
Step12: Beware artificial peripheries! Some station buffers extend beyond the spatially-cropped business locations. How would you fix this?
Step13: This works ok as a quick and dirty way to visually inspect our results. But it only works because we're analyzing/visualizing counts across study sites (i.e., station buffers) that are all the same size as each other. If the study site sizes varied (such as tracts or counties), counts might be correlated with area! Then you're just visualizing which study sites are the largest. In such cases, make sure you normalize. For example, use densities instead of counts.
6c. Which tracts have the most businesses?
Step14: How about an interactive web map instead?
Step15: 7. Spatial Indexing
When you need to find which page a topic appears on in a book, do you search through every word, page by page, until you find it? When you need to find which polygon a point lies in, do you search through every polygon, one at a time, until you find it? Sometimes. But you can avoid that slow brute-force search if you use an index.
A spatial index such as an r-tree can drastically speed up spatial operations like intersects and joins. In computer science, a tree data structure represents parent and children objects like the branches of a tree. For example, a k-d tree lets you partition space for fast nearest-neighbor search. But an r-tree is particularly useful for finding what geometries intersect with some other geometry, such as point-in-polygon queries.
An r-tree represents individual objects and their bounding boxes ("r" is for "rectangle") as the lowest level of the spatial index. It then aggregates nearby objects and represents them with their aggregate bounding box in the next higher level of the index. At yet higher levels, the r-tree aggregates bounding boxes and represents them by their bounding box, iteratively, until everything is nested into one top-level bounding box.
To search, the r-tree takes a query box and, starting at the top level, sees which (if any) bounding boxes intersect it. It then expands each intersecting bounding box and sees which of the child bounding boxes inside it intersect the query box. This proceeds recursively until all intersecting boxes are searched down to the lowest level, and returns the matching objects from the lowest level.
Step16: We can break this out into a two-step process. First find approximate matches with spatial index, then precise matches from those approximate ones.
Step17: That was fast! And we're nearly there. We intersected the spatial index with the bounds of our polygon. This returns a set of possible matches. That is, there are no false negatives but there may be some false positives if an r-tree rectangle within the bounds contains some points outside the tracts' true borders.
Unfortunately, the heavy lifting remains in filtering down the possible matches within the bounds to figure out which are within the polygon itself. To identify the precise matches (those points exactly within our polygon), we intersect the possible matches with the polygon itself.
Step18: So, the r-tree lets us filter out ~90% of the points (from the rest of the county) nearly instantly, but then the final precise point-in-polygon search (i.e., of the remaining points within the station buffers' bounding box, which are within the station buffers themselves?) consumes nearly all the runtime. | Python Code:
import ast
import contextily as cx
import folium
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import rasterio
import rasterio.features
Explanation: Spatial Data
Overview of today's topics:
Working with shapefiles, GeoPackages, CSV files, and rasters
Projection
Geometric operations
Spatial joins
Web mapping
Spatial indexing
End of explanation
# tell geopandas to read a shapefile with its read_file() function, passing in the shapefile folder
# this produces a GeoDataFrame
gdf_tracts = gpd.read_file('../../data/tl_2020_06_tract/')
gdf_tracts.shape
# just like regular pandas, see the first 5 rows of the GeoDataFrame
# this is a shapefile of polygon geometries, that is, tract boundaries
gdf_tracts.head()
# rudimentary mapping is as easy as calling the GeoDataFrame's plot method
ax = gdf_tracts.plot()
# what is the CRS?
# this derives from the shapefile's .prj file
# always make sure the shapefile you load has prj info so you get a CRS attribute!
gdf_tracts.crs
# loading a GeoPackage works the same way
gdf_stations = gpd.read_file('../../data/rail_stations.gpkg')
gdf_stations.shape
gdf_stations.crs
Explanation: 1. Loading a shapefile or GeoPackage
End of explanation
# load business location data as a regular pandas dataframe
df = pd.read_csv('../../data/Listing_of_Active_Businesses.csv')
df.shape
# clean up the data (same code from the data cleaning lecture)
df.columns = df.columns.str.lower().str.replace(' ', '_').str.strip('_#')
df = df.set_index('location_account').sort_index()
df['location_start_date'] = pd.to_datetime(df['location_start_date'])
slicer = pd.IndexSlice[:, 'business_name':'mailing_city']
df.loc[slicer] = df.loc[slicer].apply(lambda col: col.str.title(), axis='rows')
mask = pd.notnull(df['location'])
latlng = df.loc[mask, 'location'].map(ast.literal_eval)
df.loc[mask, ['lat', 'lng']] = pd.DataFrame(latlng.to_list(), index=latlng.index, columns=['lat', 'lng'])
df = df.drop(columns=['location']).dropna(subset=['lat', 'lng'])
# examine first five rows
df.head()
# create a geopandas geodataframe from the pandas dataframe
gdf_business = gpd.GeoDataFrame(df)
gdf_business.shape
# create a geometry column to contain shapely geometry for geopandas to use
# notice the shapely points are lng, lat so that they are equivalent to x, y
# also notice that we set the CRS explicitly
gdf_business['geometry'] = gpd.points_from_xy(x=gdf_business['lng'],
y=gdf_business['lat'])
gdf_business.crs = 'epsg:4326'
gdf_business.shape
Explanation: 2. Loading a CSV file
Often, you won't have a shapefile or GeoPackage (which is explicitly spatial), but rather a CSV file which is implicitly spatial (contains lat-lng columns). If you're loading a CSV file (or other non-explicitly spatial file type) of lat-lng data:
first load the CSV file as a DataFrame the usual way with pandas
then create a new geopandas GeoDataFrame from your DataFrame
manually create a geometry column
set the CRS
End of explanation
gdf_business.head()
# what's the CRS
gdf_business.crs
Explanation: Always define the CRS if you are manually creating a GeoDataFrame! Earlier, when we loaded the shapefile, geopandas loaded the CRS from the shapefile itself. But our CSV file is not explicitly spatial and it contains no CRS data, so we have to tell it what it is. In our case, the CRS is EPSG:4326, which is WGS84 lat-lng data, such as for GPS. Your data source should always tell you what CRS their coordinates are in. If they don't, ask! Don't just guess.
End of explanation
# load the raster file and view its band count, pixel width and height, null value, and geographic bounds
raster = rasterio.open('../../data/la-elevation.tif')
print(raster.count, raster.width, raster.height)
print(raster.nodata)
print(raster.bounds)
# view the raster data
df = pd.DataFrame(raster.read(1))
df
# histogram of elevations (meters above sea level) around downtown LA
ax = df[df!=raster.nodata].stack().hist(bins=50)
# get shapes representing groups of adjacent pixels with same values
# affine transformation maps pixel row/col -> spatial x/y
shapes = rasterio.features.shapes(source=raster.read(1),
transform=raster.transform)
# convert raster to GeoJSON-like vector features and create a gdf from them
# pro-tip: use generator comprehension for memory efficiency
features = ({'geometry': polygon, 'properties': {'elevation': value}} for polygon, value in shapes)
gdf_srtm = gpd.GeoDataFrame.from_features(features, crs=raster.crs)
# drop any null rows
gdf_srtm = gdf_srtm[gdf_srtm['elevation']!=raster.nodata]
gdf_srtm.shape
# view the gdf
gdf_srtm
# check its crs
gdf_srtm.crs
# plot the elevation pixels and identify pershing square
fig, ax = plt.subplots(facecolor='#111111')
ax = gdf_srtm.plot(ax=ax, column='elevation', cmap='inferno')
_ = ax.axis('off')
_ = ax.scatter(y=34.048097, x=-118.253233, c='w', marker='x', s=100)
# now it's your turn
# change the colors and also show the location of city hall on the map
Explanation: 3. Loading a raster
So far we've worked with vector data. We can also work with raster data. Raster datasets are grids of pixels, where each pixel has a value (or multiple values if multiple bands of data), while vector datasets contain geometry objects with attributes, where each geometry is represented by mathematical coordinates for points, lines, polygons, etc. "Raster is faster but vector is corrector."
Common raster data include:
- tree cover
- urbanization footprints
- land use
- elevation
In this example we load the SRTM 30m elevation raster, downloaded from https://dwtkns.com/srtm30m/, and cropped (to make a small dataset that can fit in laptop memory) via raster-crop-bbox.ipynb
End of explanation
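# a quick, hedged illustration of the raster's affine transform (used in the shapes() call above):
# rasterio can map a pixel's (row, col) position to its spatial x/y coordinates
print(raster.xy(0, 0))  # center of the top-left pixel
print(raster.xy(raster.height - 1, raster.width - 1))  # center of the bottom-right pixel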
# check if all our datasets have the same CRS
gdf_tracts.crs == gdf_stations.crs == gdf_business.crs == gdf_srtm.crs
# project them all to UTM zone 11N (see http://epsg.io/32611)
utm_crs = 'epsg:32611'
gdf_tracts = gdf_tracts.to_crs(utm_crs)
gdf_stations = gdf_stations.to_crs(utm_crs)
gdf_business = gdf_business.to_crs(utm_crs)
gdf_srtm = gdf_srtm.to_crs(utm_crs)
# check if all our datasets have the same CRS
gdf_tracts.crs == gdf_stations.crs == gdf_business.crs == gdf_srtm.crs
Explanation: 4. Projection
Your datasets need to be in the same CRS if you want to work with them together. If they're not, then project one or more of them so they're in the same CRS.
Take note of the important difference here between setting a CRS (i.e., identifying a dataset's current CRS) and projecting to a CRS (i.e., mathematically transforming your coordinates from their current CRS to a different one). Projection lets you transform, for example, from lat-lng coordinates on the surface of the round Earth to a flat two-dimensional plane for mapping and analysis in intuitive units like meters.
End of explanation
# now it's your turn
# pick a different CRS and re-project the data to it
Explanation: Be careful: heed the difference between the gdf.crs attribute and the gdf.to_crs() method. The former is the geodataframe's current CRS, whereas the latter projects the geodataframe to a new CRS.
End of explanation
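# a small illustration of that distinction (nothing here is modified in place):
print(gdf_business.crs)  # .crs just reports the GeoDataFrame's current CRS
gdf_business_wgs84 = gdf_business.to_crs('epsg:4326')  # .to_crs() returns a reprojected copy
print(gdf_business_wgs84.crs)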
%%time
# takes a few seconds...
# dissolve lets you aggregate (merge geometries together) by shared attribute values
# this is the spatial equivalent of pandas's groupby function
gdf_counties = gdf_tracts.dissolve(by='COUNTYFP', aggfunc=np.sum)
# now that we've dissolved tracts -> counties and summed their attributes,
# plot the counties by land area
fig, ax = plt.subplots(facecolor='#111111')
ax = gdf_counties.plot(ax=ax, column='ALAND', cmap='Blues_r')
_ = ax.axis('off')
# just like in regular pandas, we can filter and subset the GeoDataFrame
# retain only tracts in LA county (FIPS code 037)
mask = gdf_tracts['COUNTYFP'] == '037'
gdf_tracts_la = gdf_tracts[mask]
ax = gdf_tracts_la.plot()
# discard the channel islands' tracts to retain only the mainland
# how? sort by centroids' y-coord and discard the two southern-most
labels = gdf_tracts_la.centroid.y.sort_values().iloc[2:].index
gdf_tracts_la = gdf_tracts_la.loc[labels]
ax = gdf_tracts_la.plot()
# unary union merges all geometries in gdf into one
la_geom = gdf_tracts_la.unary_union
la_geom
# convex hull generates the minimal convex polygon around feature(s)
la_geom.convex_hull
# envelope generates the minimal rectangular polygon around feature(s)
la_geom.envelope
# get a bounding box around our elevation data
elev_bounds = gdf_srtm.unary_union.envelope
Explanation: 5. Geometric operations
GIS and spatial analysis use common "computational geometry" operations like intersects, within, and dissolve.
End of explanation
# get all the tracts that intersect those bounds
# intersects tells you if each geometry in one dataset intersects with some other (single) geometry
mask = gdf_tracts_la.intersects(elev_bounds)
gdf_tracts_dtla = gdf_tracts_la[mask]
gdf_tracts_dtla.shape
# get all business points within those bounds
# within tells you if each geometry in one dataset is within some other (single) geometry
mask = gdf_business.within(elev_bounds)
gdf_business_dtla = gdf_business[mask]
gdf_business_dtla.shape
# euclidean buffers let you analyze the area around features (use projected CRS!)
# buffer the rail stations by a half km (5-10 minute walk)
gdf_stations['geometry'] = gdf_stations.buffer(500)
fig, ax = plt.subplots(figsize=(8, 8), facecolor='#111111')
ax = gdf_tracts_dtla.plot(ax=ax, color='k')
ax = gdf_stations.plot(ax=ax, color='w', alpha=0.3)
ax = gdf_business_dtla.plot(ax=ax, color='#ffff66', marker='.', linewidth=0, markersize=20, alpha=0.05)
_ = ax.axis('off')
# you can do set operations like union, intersection, and difference
# get all the portions of tracts >0.5km from a rail station
gdf_diff = gpd.overlay(gdf_tracts_dtla, gdf_stations, how='difference')
ax = gdf_diff.plot()
Explanation: Many spatial operations, such as intersects/within, scale in time complexity as a function of 1) the number of objects, and 2) the number of vertices in the reference polygon. Using a simplified reference polygon, such as a bounding box, can drastically speed up your operation at the cost of imprecision. In this case, our raster is already approximately square, and we don't need to do precise matching, so let's use the bounding box for intersects/within to filter our tracts and businesses to those that lie within the area covered by our elevation data.
End of explanation
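# a rough sketch of why simpler reference geometries are cheaper: compare the vertex count of
# the detailed mainland-LA boundary (la_geom, built earlier) with that of its rectangular envelope
n_detailed = sum(len(g.exterior.coords) for g in getattr(la_geom, 'geoms', [la_geom]))
n_envelope = len(la_geom.envelope.exterior.coords)
print(n_detailed, 'vertices vs', n_envelope, 'vertices')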
# remember (again): always double-check CRS before any spatial operations
gdf_tracts_dtla.crs == gdf_stations.crs == gdf_business.crs == gdf_srtm.crs
Explanation: 6. Spatial join
Joins two geodataframes based on some shared spatial location.
End of explanation
# join stations to elevation data
gdf = gpd.sjoin(gdf_srtm, gdf_stations, how='inner', op='intersects')
# counts vary because these aren't elevation pixels, but regions of same value
gdf_elev_desc = gdf.groupby('name')['elevation'].describe().astype(int)
gdf_elev_desc
gdf_stations_elev = gdf_stations.merge(gdf_elev_desc, left_on='name', right_index=True)
gdf_stations_elev.head()
# which stations have the greatest elevation variation around them?
ax = gdf_stations_elev.plot(column='std')
# now it's your turn
# which station buffer covers the largest elevation range?
Explanation: 6a. How hilly is it around the stations?
End of explanation
# join stations to businesses data
gdf = gpd.sjoin(gdf_business, gdf_stations, how='inner', op='intersects')
# count how many businesses fall within each station's half-km buffer
gdf_business_desc = gdf.groupby('name').size().sort_values(ascending=False)
gdf_business_desc.name = 'count'
gdf_business_desc
Explanation: 6b. Which stations have the most businesses in their catchment areas?
End of explanation
# now it's your turn
# change earlier parts of the notebook to make sure our station buffers capture all the businesses within them
gdf_stations_business = gdf_stations.merge(gdf_business_desc, left_on='name', right_index=True)
# which stations have the most businesses around them?
ax = gdf_stations_business.plot(column='count')
Explanation: Beware artificial peripheries! Some station buffers extend beyond the spatially-cropped business locations. How would you fix this?
End of explanation
# join tracts to business data
gdf = gpd.sjoin(gdf_business, gdf_tracts_dtla, how='inner', op='intersects')
# count businesses per tract
counts = gdf.groupby('GEOID').size()
counts.name = 'count'
# merge in the counts then calculate density (businesses per km^2)
gdf_tracts_dtla_business = gdf_tracts_dtla.merge(counts, left_on='GEOID', right_index=True)
gdf_tracts_dtla_business['density'] = gdf_tracts_dtla_business['count'] / gdf_tracts_dtla_business['ALAND'] * 1e6
# plot tracts as choropleth plus station buffers
fig, ax = plt.subplots(figsize=(8, 8), facecolor='#111111')
ax = gdf_tracts_dtla_business.plot(ax=ax, column='density', cmap='viridis')
ax = gdf_stations.plot(ax=ax, alpha=0.2, linewidth=3, edgecolor='w', color='none')
_ = ax.axis('off')
# this time, let's add a basemap for context
fig, ax = plt.subplots(figsize=(8, 8), facecolor='#111111')
ax = gdf_tracts_dtla_business.plot(ax=ax, column='density', cmap='viridis',
alpha=0.7, linewidth=0.3, edgecolor='k')
ax = gdf_stations.plot(ax=ax, alpha=0.3, linewidth=3, edgecolor='w', color='none')
_ = ax.axis('off')
# add the basemap with contextily, choosing a tile provider
# or try cx.providers.Stamen.TonerBackground, etc
cx.add_basemap(ax, crs=gdf_stations.crs.to_string(),
source=cx.providers.CartoDB.DarkMatter)
ax.figure.savefig('map.png', dpi=600, bbox_inches='tight')
# now it's your turn
# change the tile provider, the tract colors, the alphas, etc to find a plot you like
Explanation: This works ok as a quick and dirty way to visually inspect our results. But it only works because we're analyzing/visualizing counts across study sites (i.e., station buffers) that are all the same size as each other. If the study site sizes varied (such as tracts or counties), counts might be correlated with area! Then you're just visualizing which study sites are the largest. In such cases, make sure you normalize. For example, use densities instead of counts.
6c. Which tracts have the most businesses?
End of explanation
# optionally bin the data into quintiles
bins = list(gdf_tracts_dtla_business['density'].quantile([0, 0.2, 0.4, 0.6, 0.8, 1]))
# create leaflet choropleth web map
m = folium.Map(location=(34.047223, -118.253555), zoom_start=15, tiles='cartodbdark_matter')
c = folium.Choropleth(geo_data=gdf_tracts_dtla_business,
data=gdf_tracts_dtla_business,
#bins=bins,
columns=['GEOID', 'density'],
key_on='feature.properties.GEOID',
highlight=True,
fill_color='YlOrRd_r',
legend_name='Businesses per square km').add_to(m)
# add mouseover tooltip to the tracts
c.geojson.add_child(folium.features.GeoJsonTooltip(['GEOID', 'density']))
# save web map to disk
m.save('webmap.html')
m
# now it's your turn
# try binning the data in different ways. how would you do it?
# try changing the colors, basemap, and what variable you're visualizing
Explanation: How about an interactive web map instead?
End of explanation
# geopandas uses r-tree spatial indexes
# if a spatial index doesn't already exist,
# it will be created the first time the sindex attribute is accessed
sindex = gdf_business.sindex
%%time
# count all the businesses within the station buffers
polygon = gdf_stations.unary_union
gdf_business.within(polygon).sum()
Explanation: 7. Spatial Indexing
When you need to find which page a topic appears on in a book, do you search through every word, page by page, until you find it? When you need to find which polygon a point lies in, do you search through every polygon, one at a time, until you find it? Sometimes. But you can avoid that slow brute-force search if you use an index.
A spatial index such as an r-tree can drastically speed up spatial operations like intersects and joins. In computer science, a tree data structure represents parent and children objects like the branches of a tree. For example, a k-d tree lets you partition space for fast nearest-neighbor search. But an r-tree is particularly useful for finding what geometries intersect with some other geometry, such as point-in-polygon queries.
An r-tree represents individual objects and their bounding boxes ("r" is for "rectangle") as the lowest level of the spatial index. It then aggregates nearby objects and represents them with their aggregate bounding box in the next higher level of the index. At yet higher levels, the r-tree aggregates bounding boxes and represents them by their bounding box, iteratively, until everything is nested into one top-level bounding box.
To search, the r-tree takes a query box and, starting at the top level, sees which (if any) bounding boxes intersect it. It then expands each intersecting bounding box and sees which of the child bounding boxes inside it intersect the query box. This proceeds recursively until all intersecting boxes are searched down to the lowest level, and returns the matching objects from the lowest level.
End of explanation
len(gdf_business)
%%time
# get the positions of possible matches, to use with iloc
positions = sindex.intersection(polygon.bounds)
possible_matches = gdf_business.iloc[positions]
len(possible_matches)
Explanation: We can break this out into a two-step process. First find approximate matches with spatial index, then precise matches from those approximate ones.
End of explanation
%%time
mask = possible_matches.intersects(polygon)
precise_matches = possible_matches[mask]
len(precise_matches)
Explanation: That was fast! And we're nearly there. We intersected the spatial index with the bounds of our polygon. This returns a set of possible matches. That is, there are no false negatives but there may be some false positives if an r-tree rectangle within the bounds contains some points outside the station buffers' true borders.
Unfortunately, the heavy lifting remains in filtering down the possible matches within the bounds to figure out which are within the polygon itself. To identify the precise matches (those points exactly within our polygon), we intersect the possible matches with the polygon itself.
End of explanation
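# the two-step pattern above, wrapped into a small reusable helper (a sketch, not from the lecture itself)
def rtree_within(gdf, polygon):
    # step 1: cheap approximate filter via the spatial index and the polygon's bounding box
    candidates = gdf.iloc[list(gdf.sindex.intersection(polygon.bounds))]
    # step 2: exact geometric test on the much smaller candidate set
    return candidates[candidates.within(polygon)]

len(rtree_within(gdf_business, polygon))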
# what false positives appeared among the possible matches?
labels = possible_matches.index.difference(precise_matches.index)
false_positives = possible_matches.loc[labels]
# visualize the precise matches vs the false positives
ax = gdf_stations.plot(color='gray')
ax = false_positives.plot(ax=ax, c='r', markersize=0.1)
ax = precise_matches.plot(ax=ax, c='b', markersize=0.1)
_ = ax.axis('off')
Explanation: So, the r-tree lets us filter out ~90% of the points (from the rest of the county) nearly instantly, but then the final precise point-in-polygon search (i.e., of the remaining points within the station buffers' bounding box, which are within the station buffers themselves?) consumes nearly all the runtime.
End of explanation |
15,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some sources
Step1: Run model
Step2: Questions
Step3: Now let's look at the variations of the respective clusters, nmf topic and epithets
Question
Step4: Visualize topic clusters
Step5: Kmeans based tfidf matrix
Step6: Kmeans based on nmf | Python Code:
import datetime as dt
import os
import time
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_index
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithets
from cltk.corpus.greek.tlg.parse_tlg_indices import select_authors_by_epithet
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_of_author
from cltk.corpus.greek.tlg.parse_tlg_indices import get_id_author
from cltk.stop.greek.stops import STOPS_LIST as greek_stops
from cltk.tokenize.word import nltk_tokenize_words
from greek_accentuation.characters import base
import pandas # pip install pandas
from sklearn.decomposition import NMF # pip install scikit-learn scipy
from sklearn.externals import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
def stream_lemmatized_files(corpus_dir, reject_none_epithet=False, reject_chars_less_than=None):
# return all docs in a dir; parameters for removing by None epithet and short texts
user_dir = os.path.expanduser('~/cltk_data/user_data/' + corpus_dir)
files = os.listdir(user_dir)
map_id_author = get_id_author()
for file in files:
filepath = os.path.join(user_dir, file)
file_id = file[3:-4]
author = map_id_author[file_id]
if reject_none_epithet:
# get id numbers and then epithets of each author
author_epithet = get_epithet_of_author(file_id)
if not author_epithet:
continue
with open(filepath) as fo:
text = fo.read()
if reject_chars_less_than:
if len(text) < reject_chars_less_than:
continue
yield file_id, author, text
t0 = dt.datetime.utcnow()
id_author_text_list = []
for tlg_id, author, text in stream_lemmatized_files('tlg_lemmatized_no_accents_no_stops',
reject_none_epithet=True,
reject_chars_less_than=500):
id_author_text_list.append((tlg_id, author, text))
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
print('Number of texts:', len(id_author_text_list))
# view all epithets:
get_epithets()
t0 = dt.datetime.utcnow()
# tf-idf features
n_samples = 2000
n_features = 1000 # TODO: increase
n_topics = len(get_epithets()) # 55
n_top_words = 20
tfidf_vectorizer = TfidfVectorizer(max_df=1.0,
min_df=1,
max_features=n_features,
stop_words=None)
texts_list = [t[2] for t in id_author_text_list]
tfidf = tfidf_vectorizer.fit_transform(texts_list)
# save features
vector_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_set_reduction_tfidf_{0}features.pickle'.format(n_features))
joblib.dump(tfidf, vector_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
# time on good server:
# 1000 features: 0:01:22
Explanation: Some sources:
Gensim LDA: https://radimrehurek.com/gensim/models/ldamodel.html
Misc clustering with Python: http://brandonrose.org/clustering
Scikit LDA: http://scikit-learn.org/0.16/modules/generated/sklearn.lda.LDA.html
Scikit NMF: http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html
WMD in Python: http://vene.ro/blog/word-movers-distance-in-python.html
Original WMD paper: http://jmlr.org/proceedings/papers/v37/kusnerb15.pdf
Make word-doc matrix
End of explanation
t0 = dt.datetime.utcnow()
print("Fitting the NMF model with tf-idf features, "
"n_samples=%d and n_features=%d..."
% (n_samples, n_features))
nmf = NMF(n_components=n_topics, random_state=1,
alpha=.1, l1_ratio=.5).fit(tfidf)
# save model
nmf_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_set_reduction_tfidf_{0}features_nmf.pickle'.format(n_features))
joblib.dump(nmf, nmf_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
def print_top_words(model, feature_names, n_top_words):
for topic_id, topic in enumerate(model.components_):
print('Topic #{}:'.format(int(topic_id)))
print(''.join([feature_names[i] + ' ' + str(round(topic[i], 2))
+' | ' for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
print("Topics in NMF model:")
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
print_top_words(nmf, tfidf_feature_names, n_top_words)
tfidf.shape
doc_topic_distrib = nmf.transform(tfidf) # numpy.ndarray
doc_topic_distrib.shape
df = pandas.DataFrame(doc_topic_distrib)
len(id_author_text_list)
authors_in_order = {index:_tuple[1] for index, _tuple in enumerate(id_author_text_list)}
print(len(authors_in_order))
df = df.rename(authors_in_order)
Explanation: Run model
End of explanation
df
# for each topic (col), which author (row) has the highest value?
# TODO: get top 5 authors
for count in range(n_topics):
print('Top author of topic {0}: {1}'.format(count, df[count].idxmax()))
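# one possible way to handle the TODO above (a sketch, not part of the original analysis):
# use nlargest to pull the top 5 authors for each topic
for count in range(n_topics):
    top5_authors = df[count].nlargest(5).index.tolist()
    print('Top 5 authors of topic {0}: {1}'.format(count, top5_authors))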
# Now, transpose df and get top topic of each author
# for each author (col of the transposed df), which topic (row) has the highest value?
# TODO: get top 5 topics per author
df_t = df.T
#df_t.head(10)
df_t
map_name_epithet_id = {}
for curr_name in df_t.columns:
print(curr_name)
try:
top_topic = int(df_t[curr_name].idxmax())
except TypeError: # there are some duplicate names, just take one value for now
top_topic = int(df_t[curr_name].idxmax().iloc[0])
print(' NMF topic:', top_topic)
for _id, name in get_id_author().items():
if curr_name == name:
epithet = get_epithet_of_author(_id)
print(' Traditional epithet:', epithet)
map_name_epithet_id[name] = {'id': _id,
'top_topic': top_topic,
'epithet': epithet}
print()
Explanation: Questions:
to what topic does each author most belong? (and how to determine cutoff?)
what authors most exemplify a topic?
End of explanation
# Group by epithet, collect topics
# {<epithet>: [<topics>]}
from collections import defaultdict
map_epithet_topics = defaultdict(list)
for name, _dict in map_name_epithet_id.items():
epithet = _dict['epithet']
top_topic = _dict['top_topic']
map_epithet_topics[epithet].append(top_topic)
# import pprint
# pp = pprint.PrettyPrinter(indent=4)
# pp.pprint(dict(map_epithet_topics))
print(dict(map_epithet_topics))
# which epithet has the most topics associated with it?
# That is, what epithet is most topic-lexical diverse?
#? Perhaps do simple lex diversity count
map_epithet_count_topics = {}
for epithet, topic_list in map_epithet_topics.items():
map_epithet_count_topics[epithet] = len(topic_list)
sorted(map_epithet_count_topics.items(), key=lambda x:x[1], reverse=True)
# Group by topic, collect epithets
# {<topic>: [<epithets>]}
from collections import defaultdict
map_topic_epithets = defaultdict(list)
for name, _dict in map_name_epithet_id.items():
epithet = _dict['epithet']
top_topic = _dict['top_topic']
map_topic_epithets[top_topic].append(epithet)
dict(map_topic_epithets)
# least, most cohesive epithets
# which topic has the most epithets associated with it?
map_topics_count_epithet = {}
for topic, epithet_list in map_topic_epithets.items():
map_topics_count_epithet[topic] = len(epithet_list)
# map_topics_count_epithet
sorted_list_tuples = sorted(map_topics_count_epithet.items(), key=lambda x:x[1], reverse=True)
for topic_freq in sorted_list_tuples:
topic_number = str(topic_freq[0])
doc_freq = str(topic_freq[1])
print('Topic #{0} has {1} author-documents in it'.format(topic_number, doc_freq))
# http://scikit-learn.org/stable/modules/clustering.html
dataset_array = df.values
print(dataset_array.dtype) # kmeans needs to be homogeneous data type (here, float64)
print(dataset_array)
# do I need to normalize
# sklearn.preprocessing.StandardScaler
from sklearn import preprocessing
# http://scikit-learn.org/stable/modules/preprocessing.html
# first load scaler and train on given data set
scaler = preprocessing.StandardScaler().fit(df)
scaler.mean_
scaler.scale_
t0 = dt.datetime.utcnow()
# actually do normalization; can be reused for eg a training set
df_scaled = pandas.DataFrame(scaler.transform(df))
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
Explanation: Now let's look at the variations of the respective clusters, nmf topic and epithets
Question: Which topics are found within each epithet?
Question: Which epithets are found within each topic? And how many?
End of explanation
from sklearn import cluster
# Convert DataFrame to matrix (numpy.ndarray)
matrix = df_scaled.as_matrix()
km = cluster.KMeans(n_clusters=n_topics)
km.fit(matrix)
# Get cluster assignment labels
labels = km.labels_ # these are the topics 0-54; array([53, 53, 16, ..., 42, 16, 13]
# Format results as a DataFrame
df_clusters = pandas.DataFrame([df_scaled.index, labels]).T # add author names to the 0 col
df_clusters.head(5)
%matplotlib inline
import matplotlib.pyplot as plt # pip install matplotlib
import matplotlib
matplotlib.style.use('ggplot')
# from pandas.tools.plotting import table
# this is a cluseter of the already-clustered kmeans topics; not very informative
plt.figure()
df_clusters.plot.scatter(x=0, y=1) # y is topics no., x is doc id
Explanation: Visualize topic clusters
End of explanation
# try clustering the original tfidf
# tfidf_dense = tfidf.toarray()
scaler = preprocessing.StandardScaler(with_mean=False).fit(tfidf) # either with_mean=False or make dense
# save scaler
scaler_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_{0}features_scaler.pickle'.format(n_features))
joblib.dump(df_scaled, scaler_fp)
import numpy as np
# direct Pandas conversion of sparse scipy matrix not supported
# Following http://stackoverflow.com/a/17819427
# df_scaled_tfidf = pandas.DataFrame(scaler.transform(tfidf))
# df_scaled_tfidf = pandas.DataFrame()
t0 = dt.datetime.utcnow()
scaler_tfidf = scaler.transform(tfidf) # sparse matrix of type '<class 'numpy.float64'>
pandas.SparseDataFrame([pandas.SparseSeries(scaler_tfidf[i].toarray().ravel()) for i in np.arange(scaler_tfidf.shape[0])])
df_scaled_tfidf = pandas.SparseDataFrame([pandas.SparseSeries(scaler_tfidf[i].toarray().ravel()) for i in np.arange(scaler_tfidf.shape[0])])
# type(df) # pandas.sparse.frame.SparseDataFrame
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
t0 = dt.datetime.utcnow()
# Convert DataFrame to matrix (numpy.ndarray)
matrix_tfidf = df_scaled_tfidf.as_matrix()
km_tfidf = cluster.KMeans(n_clusters=n_topics)
km_tfidf.fit(matrix_tfidf)
# Get cluster assignment labels
labels = km_tfidf.labels_ # these are the topics 0-54; array([53, 53, 16, ..., 42, 16, 13]
# Format results as a DataFrame
df_clusters_tfidf = pandas.DataFrame([df_scaled_tfidf.index, labels]).T # add author names to the 0 col
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
df_clusters_tfidf.head(10)
plt.figure()
df_clusters_tfidf.plot.scatter(x=0, y=1) # y is topics no., x is doc id
Explanation: Kmeans based tfidf matrix
End of explanation
nmf_array = nmf.components_
t0 = dt.datetime.utcnow()
# nmf_dense = nmf_array.toarray()
scaler = preprocessing.StandardScaler().fit(nmf_array) # either with_mean=False or make dense
# save features
tfidf_matrix_scaler_fp = os.path.expanduser('~/cltk_data/user_data/tlg_lemmatized_no_accents_no_stops_tfidf_matrix_{0}features.pickle'.format(n_features))
joblib.dump(scaler, tfidf_matrix_scaler_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
df_scaled_nmf = pandas.DataFrame(scaler.transform(nmf_array))
# Convert DataFrame to matrix (numpy.ndarray)
matrix_nmf = df_scaled_nmf.as_matrix()
km_nmf = cluster.KMeans(n_clusters=n_topics)
km_nmf.fit(matrix_nmf)
# Get cluster assignment labels
labels = km_nmf.labels_ # these are the clusters 0-54; array([ 1, 4, 11, 14, 28, 9, 30,
# Format results as a DataFrame
df_clusters_nmf = pandas.DataFrame([df_scaled_nmf.index, labels]).T # add author names to the 0 col
df_clusters_nmf.head(10)
plt.figure()
df_clusters_nmf.plot.scatter(x=0, y=1) #axis?
Explanation: Kmeans based on nmf
End of explanation |
15,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: If you remember, at the beginning of this book,
we saw a quote from John Quackenbush that essentially said
that the reason a graph is interesting is because of its edges.
In this chapter, we'll see this in action once again,
as we are going to figure out how to leverage the edges
to find special structures in a graph.
Triangles
The first structure that we are going to learn about is triangles.
Triangles are super interesting!
They are what one might consider to be
"the simplest complex structure" in a graph.
Triangles can also have semantically-rich meaning depending on the application.
To borrow a bad example, love triangles in social networks are generally frowned upon,
while on the other hand, when we connect two people that we know together,
we instead complete a triangle.
Load Data
To learn about triangles,
we are going to leverage a physician trust network.
Here's the data description
Step2: Exercise
Step3: Exercise
Step4: Now, test your implementation below!
The code cell will not error out if your answer is correct.
Step5: As you can see from the test function above,
NetworkX provides an nx.triangles(G, node) function.
It returns the number of triangles that a node is involved in.
We convert it to boolean as a hack to check whether or not
a node is involved in a triangle relationship
because 0 is equivalent to boolean False,
while any non-zero number is equivalent to boolean True.
Exercise
Step6: Triadic Closure
In professional circles, making connections between two people
is one of the most valuable things you can do professionally.
What you do in that moment is what we would call
triadic closure.
Algorithmically, we can do the same thing
if we maintain a graph of connections!
Essentially, what we are looking for
are "open" or "unfinished" triangles".
In this section, we'll try our hand at implementing
a rudimentary triadic closure system.
Exercise
Step7: Exercise
Step8: Exercise
Step9: Cliques
Triangles are interesting in a graph theoretic setting
because triangles are the simplest complex clique that exist.
But wait!
What is the definition of a "clique"?
A "clique" is a set of nodes in a graph
that are fully connected with one another
by edges between them.
Exercise
Step10: $k$-Cliques
Cliques are identified by their size $k$,
which is the number of nodes that are present in the clique.
A triangle is what we would consider to be a $k$-clique where $k=3$.
A square with cross-diagonal connections is what we would consider to be
a $k$-clique where $k=4$.
By now, you should get the gist of the idea.
Maximal Cliques
Related to this idea of a $k$-clique is another idea called "maximal cliques".
Maximal cliques are defined as follows
Step11: Exercise
Step12: Now, test your implementation against the test function below.
Step13: Clique Decomposition
One super neat property of cliques
is that every clique of size $k$
can be decomposed to the set of cliques of size $k-1$.
Does this make sense to you?
If not, think about triangles (3-cliques).
They can be decomposed to three edges (2-cliques).
Think again about 4-cliques.
Housed within 4-cliques are four 3-cliques.
Draw it out if you're still not convinced!
Exercise
Step14: Connected Components
Now that we've explored a lot around cliques,
we're now going to explore this idea of "connected components".
To do so, I am going to have you draw the graph
that we are working with.
Step15: Exercise
Step16: Defining connected components
From Wikipedia
Step17: Let's see how many connected component subgraphs are present
Step18: Exercise
Step19: Now, draw a CircosPlot with the node order and colouring
dictated by the subgraph key.
Step20: Using an arc plot will also clearly illuminate for us
that there are no inter-group connections.
Step21: Voila! It looks quite clear that there are indeed four disjoint group of physicians.
Solutions | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="3DWSRCbPPJs", width="100%")
Explanation: Introduction
End of explanation
from nams import load_data as cf
G = cf.load_physicians_network()
Explanation: If you remember, at the beginning of this book,
we saw a quote from John Quackenbush that essentially said
that the reason a graph is interesting is because of its edges.
In this chapter, we'll see this in action once again,
as we are going to figure out how to leverage the edges
to find special structures in a graph.
Triangles
The first structure that we are going to learn about is triangles.
Triangles are super interesting!
They are what one might consider to be
"the simplest complex structure" in a graph.
Triangles can also have semantically-rich meaning depending on the application.
To borrow a bad example, love triangles in social networks are generally frowned upon,
while on the other hand, when we connect two people that we know together,
we instead complete a triangle.
Load Data
To learn about triangles,
we are going to leverage a physician trust network.
Here's the data description:
This directed network captures innovation spread among 246 physicians
for towns in Illinois, Peoria, Bloomington, Quincy and Galesburg.
The data was collected in 1966.
A node represents a physician and an edge between two physicians
shows that the left physician told that the right physician is his friend
or that he turns to the right physician if he needs advice
or is interested in a discussion.
There always only exists one edge between two nodes
even if more than one of the listed conditions are true.
End of explanation
from nams.solutions.structures import triangle_finding_strategies
# triangle_finding_strategies()
Explanation: Exercise: Finding triangles in a graph
This exercise is going to flex your ability
to "think on a graph", just as you did in the previous chapters.
Leveraging what you know, can you think of a few strategies
to find triangles in a graph?
End of explanation
def in_triangle(G, node):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import in_triangle
# UNCOMMENT THE NEXT LINE TO SEE MY ANSWER
# in_triangle??
Explanation: Exercise: Identify whether a node is in a triangle relationship or not
Let's now get down to implementing this next piece of code.
Write a function that identifies whether a node is or is not in a triangle relationship.
It should take in a graph G and a node n,
and return a boolean True if the node n is in any triangle relationship
and boolean False if the node n is not in any triangle relationship.
A hint that may help you:
Every graph object G has a G.has_edge(n1, n2) method that you can use to identify whether a graph has an edge between n1 and n2.
Also:
itertools.combinations lets you iterate over every K-combination of items in an iterable.
End of explanation
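# one possible sketch of the in_triangle function described above, using the has_edge hint
# (the nams solution imported earlier is the reference answer; this is only an illustration)
from itertools import combinations

def in_triangle_sketch(G, node):
    # a node sits in a triangle if any two of its neighbors are themselves connected
    for u, v in combinations(G.neighbors(node), 2):
        if G.has_edge(u, v):
            return True
    return False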
from random import sample
import networkx as nx
def test_in_triangle():
    nodes = sample(list(G.nodes()), 10)
    for node in nodes:
        assert in_triangle(G, node) == bool(nx.triangles(G, node))
test_in_triangle()
Explanation: Now, test your implementation below!
The code cell will not error out if your answer is correct.
End of explanation
def get_triangle_neighbors(G, n):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import get_triangle_neighbors
# UNCOMMENT THE NEXT LINE TO SEE MY ANSWER
# get_triangle_neighbors??
def plot_triangle_relations(G, n):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import plot_triangle_relations
plot_triangle_relations(G, 3)
Explanation: As you can see from the test function above,
NetworkX provides an nx.triangles(G, node) function.
It returns the number of triangles that a node is involved in.
We convert it to boolean as a hack to check whether or not
a node is involved in a triangle relationship
because 0 is equivalent to boolean False,
while any non-zero number is equivalent to boolean True.
Exercise: Extract triangles for plotting
We're going to leverage another piece of knowledge that you already have:
the ability to extract subgraphs.
We'll be plotting all of the triangles that a node is involved in.
Given a node, write a function that extracts out
all of the neighbors that it is in a triangle relationship with.
Then, in a new function,
implement code that plots only the subgraph
that contains those nodes.
End of explanation
from nams.solutions.structures import triadic_closure_algorithm
# UNCOMMENT FOR MY ANSWER
# triadic_closure_algorithm()
Explanation: Triadic Closure
In professional circles, making connections between two people
is one of the most valuable things you can do professionally.
What you do in that moment is what we would call
triadic closure.
Algorithmically, we can do the same thing
if we maintain a graph of connections!
Essentially, what we are looking for
are "open" or "unfinished" triangles.
In this section, we'll try our hand at implementing
a rudimentary triadic closure system.
Exercise: Design the algorithm
What graph logic would you use to identify triadic closure opportunities?
Try writing out your general strategy, or discuss it with someone.
End of explanation
def get_open_triangles_neighbors(G, n):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import get_open_triangles_neighbors
# UNCOMMENT THE NEXT LINE TO SEE MY ANSWER
# get_open_triangles_neighbors??
Explanation: Exercise: Implement triadic closure.
Now, try your hand at implementing triadic closure.
Write a function that takes in a graph G and a node n,
and returns all of the neighbors that are potential triadic closures
with n being the center node.
End of explanation
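# one possible reading of the exercise above, sketched out (not the packaged nams solution)
from itertools import combinations

def get_open_triangles_neighbors_sketch(G, n):
    # neighbors of n that appear in at least one *unclosed* pair form triadic closure opportunities
    open_neighbors = set()
    for u, v in combinations(G.neighbors(n), 2):
        if not G.has_edge(u, v):
            open_neighbors.update([u, v])
    return open_neighbors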
def plot_open_triangle_relations(G, n):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import plot_open_triangle_relations
plot_open_triangle_relations(G, 3)
Explanation: Exercise: Plot the open triangles
Now, write a function that takes in a graph G and a node n,
and plots out that node n and all of the neighbors
that it could help close triangles with.
End of explanation
from nams.solutions.structures import simplest_clique
# UNCOMMENT THE NEXT LINE TO SEE MY ANSWER
# simplest_clique()
Explanation: Cliques
Triangles are interesting in a graph theoretic setting
because triangles are the simplest complex clique that exist.
But wait!
What is the definition of a "clique"?
A "clique" is a set of nodes in a graph
that are fully connected with one another
by edges between them.
Exercise: Simplest cliques
Given this definition, what is the simplest "clique" possible?
End of explanation
# I have truncated the output to the first 5 maximal cliques.
list(nx.find_cliques(G))[0:5]
Explanation: $k$-Cliques
Cliques are identified by their size $k$,
which is the number of nodes that are present in the clique.
A triangle is what we would consider to be a $k$-clique where $k=3$.
A square with cross-diagonal connections is what we would consider to be
a $k$-clique where $k=4$.
By now, you should get the gist of the idea.
Maximal Cliques
Related to this idea of a $k$-clique is another idea called "maximal cliques".
Maximal cliques are defined as follows:
A maximal clique is a subgraph of nodes in a graph
to which no other node can be added to it and
still remain a clique.
NetworkX provides a way to find all maximal cliques:
End of explanation
def size_k_maximal_cliques(G, k):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import size_k_maximal_cliques
Explanation: Exercise: finding sized-$k$ maximal cliques
Write a generator function that yields all maximal cliques of size $k$.
I'm requesting a generator as a matter of good practice;
you never know when the list you return might explode in memory consumption,
so generators are a cheap and easy way to reduce memory usage.
End of explanation
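# a minimal sketch of such a generator (the imported nams solution above is the reference answer)
def size_k_maximal_cliques_sketch(G, k):
    # lazily yield only the maximal cliques whose size is exactly k
    return (clique for clique in nx.find_cliques(G) if len(clique) == k)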
def test_size_k_maximal_cliques(G, k):
clique_generator = size_k_maximal_cliques(G, k)
for clique in clique_generator:
assert len(clique) == k
test_size_k_maximal_cliques(G, 5)
Explanation: Now, test your implementation against the test function below.
End of explanation
def find_k_cliques(G, k):
# your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import find_k_cliques
def test_find_k_cliques(G, k):
for clique in find_k_cliques(G, k):
assert len(clique) == k
test_find_k_cliques(G, 3)
Explanation: Clique Decomposition
One super neat property of cliques
is that every clique of size $k$
can be decomposed to the set of cliques of size $k-1$.
Does this make sense to you?
If not, think about triangles (3-cliques).
They can be decomposed to three edges (2-cliques).
Think again about 4-cliques.
Housed within 4-cliques are four 3-cliques.
Draw it out if you're still not convinced!
Exercise: finding all $k$-cliques in a graph
Knowing this property of $k$-cliques,
write a generator function that yields all $k$-cliques in a graph,
leveraging the nx.find_cliques(G) function.
Some hints to help you along:
If a $k$-clique can be decomposed to its $k-1$ cliques,
it follows that the $k-1$ cliques can be decomposed into $k-2$ cliques,
and so on until you hit 2-cliques.
This implies that all cliques of size $k$
house cliques of size $n < k$, where $n >= 2$.
End of explanation
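# a sketch of the decomposition property described above: every maximal clique of size >= k
# houses size-k sub-cliques, so we can enumerate (and de-duplicate) those
from itertools import combinations

def find_k_cliques_sketch(G, k):
    seen = set()
    for clique in nx.find_cliques(G):
        if len(clique) >= k:
            for sub_clique in combinations(sorted(clique), k):
                if sub_clique not in seen:
                    seen.add(sub_clique)
                    yield list(sub_clique)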
import nxviz as nv
nv.circos(G)
Explanation: Connected Components
Now that we've explored a lot around cliques,
we're now going to explore this idea of "connected components".
To do so, I am going to have you draw the graph
that we are working with.
End of explanation
from nams.solutions.structures import visual_insights
# UNCOMMENT TO SEE MY ANSWER
# visual_insights()
Explanation: Exercise: Visual insights
From this rendering of the CircosPlot,
what visual insights do you have about the structure of the graph?
End of explanation
ccsubgraph_nodes = list(nx.connected_components(G))
Explanation: Defining connected components
From Wikipedia:
In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.
NetworkX provides a function to let us find all of the connected components:
End of explanation
len(ccsubgraph_nodes)
Explanation: Let's see how many connected component subgraphs are present:
End of explanation
def label_connected_component_subgraphs(G):
# Your answer here
return G
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import label_connected_component_subgraphs
G_labelled = label_connected_component_subgraphs(G)
# UNCOMMENT TO SEE THE ANSWER
# label_connected_component_subgraphs??
Explanation: Exercise: visualizing connected component subgraphs
In this exercise, we're going to draw a circos plot of the graph,
but colour and order the nodes by their connected component subgraph.
Recall Circos API:
python
c = CircosPlot(G, node_order='node_attribute', node_color='node_attribute')
c.draw()
plt.show() # or plt.savefig(...)
Follow the steps along here to accomplish this.
Firstly, label the nodes with a unique identifier for connected component subgraph
that it resides in.
Use subgraph to store this piece of metadata.
End of explanation
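# one way the labelling described above could look (a sketch; the nams solution may differ in details)
def label_cc_subgraphs_sketch(G):
    G = G.copy()
    for i, node_set in enumerate(nx.connected_components(G)):
        for n in node_set:
            G.nodes[n]['subgraph'] = i
    return G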
def plot_cc_subgraph(G):
# Your answer here
pass
# COMMENT OUT THE IMPORT LINE TO TEST YOUR ANSWER
from nams.solutions.structures import plot_cc_subgraph
from nxviz import annotate
plot_cc_subgraph(G_labelled)
annotate.circos_group(G_labelled, group_by="subgraph")
Explanation: Now, draw a CircosPlot with the node order and colouring
dictated by the subgraph key.
End of explanation
nv.arc(G_labelled, group_by="subgraph", node_color_by="subgraph")
annotate.arc_group(G_labelled, group_by="subgraph", rotation=0)
Explanation: Using an arc plot will also clearly illuminate for us
that there are no inter-group connections.
End of explanation
from nams.solutions import structures
import inspect
print(inspect.getsource(structures))
Explanation: Voila! It looks quite clear that there are indeed four disjoint group of physicians.
Solutions
End of explanation |
15,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import facility data and NERC labels
Step1: Read NERC shapefile and merge with geo_df
Step2: Merge NERC labels into the facility df
Step3: Filter out data older than 2014 to reduce size
Step4: Import state-level generation data
Step5: Total generation and fuel consumption for each fuel category
Annual
Step6: It's interesting that the facility data has fuel consumption for solar generation and the state data doesn't. Looking at a 923 data file, it's clear that the fuel consumption is just based on a conversion efficiency of 36.6% across all facilities.
Step7: How much generation from large sources (Hydro, wind, coal, natural gas, and nuclear) is missed by monthly 923 data?
Step8: 2015 generation and fuel consumption from annual vs monthly reporting plants
The goal here is to figure out how much of generation and fuel consumption from facilities that only report annually is in each NERC region (by state)
Step9: Make a dataframe with generation, fuel consumption, and reporting frequency of facilities in 2015
Step10: Number of NERC regions in a state
Step13: Fraction of generation/consumption from Annual reporting facilities in each NERC region of a state
This is development of a method that will be used to approximate the fraction of EIA-estimated generation and consumption within each state that gets apportioned to each NERC regions (when there is more than one). The idea is to take data from the most recent "final" EIA-923 and use the annual reporting facilities to approximate the divisions for more recent data. I still need to figure out if it's better to do the calculation by month within a year or just for the year as a whole.
Determining if it's better to do month-by-month vs a single value for the whole year will depend on if the share of generation/consumption from Annual reporting facilities in each NERC changes much over the course of the year. There is the potential for error either way, and maybe even differences by state. Annual is certainly simpler.
While looking at data for Texas, I've discovered that generation from Annual reporting facilities can be negative. Need to figure out how (if?) to deal with this...
Conclusion
While there can be variation of % generation in each NERC within a state over the course of 2015, most fuel categories across most states are quite stable. And when fuels do a have a wide spread over the year, they also tend to not be a large fraction of total generation within the NERC region. Given these observations, I'm going to stick with a split calculated as the average over an entire year.
Step14: This is the percent of generation, total fuel consumption, and electric fuel consumption from facilities that report annually to EIA-923
Step15: States that include more than one NERC region
Step16: The dataframe below shows all states with more than one NERC region where facility generation is at least 5% below EIA's state-level estimate in 2016. | Python Code:
import os
import geopandas as gpd
import numpy as np
import pandas as pd
import seaborn as sns
from geopandas import GeoDataFrame
from shapely.geometry import Point

path = os.path.join('Data storage', 'Facility gen fuels and CO2 2017-05-25.zip')
facility_df = pd.read_csv(path, parse_dates=['datetime'])
facility_df.head()
facility_df.dropna(inplace=True, subset=['lat', 'lon'])
cols = ['lat', 'lon', 'plant id', 'year']
small_facility = facility_df.loc[:, cols].drop_duplicates()
geometry = [Point(xy) for xy in zip(small_facility.lon, small_facility.lat)]
# small_facility = small_facility.drop(['lon', 'lat'], axis=1)
crs = {'init': 'epsg:4326'}
geo_df = GeoDataFrame(small_facility, crs=crs, geometry=geometry)
Explanation: Import facility data and NERC labels
End of explanation
path = os.path.join('Data storage', 'NERC_Regions_EIA', 'NercRegions_201610.shp')
regions = gpd.read_file(path)
facility_nerc = gpd.sjoin(geo_df, regions, how='inner', op='within')
facility_nerc.head()
Explanation: Read NERC shapefile and merge with geo_df
End of explanation
cols = ['plant id', 'year', 'NERC']
facility_df = facility_df.merge(facility_nerc.loc[:, cols],
on=['plant id', 'year'], how='left')
facility_df.head()
Explanation: Merge NERC labels into the facility df
End of explanation
facility_df['state'] = facility_df['geography'].str[-2:]
keep_cols = ['fuel', 'year', 'month', 'datetime', 'state', 'plant id', 'NERC',
'generation (MWh)', 'total fuel (mmbtu)', 'elec fuel (mmbtu)']
facility_df = facility_df.loc[facility_df['year'] >= 2014, keep_cols]
facility_fuel_cats = {'COW': ['SUB', 'BIT', 'LIG', 'WC', 'SC', 'RC', 'SGC'],
'NG': ['NG'],
'PEL': ['DFO', 'RFO', 'KER', 'JF',
'PG', 'WO', 'SGP'],
'PC': ['PC'],
'HYC': ['WAT'],
'HPS': [],
'GEO': ['GEO'],
'NUC': ['NUC'],
'OOG': ['BFG', 'OG', 'LFG'],
'OTH': ['OTH', 'MSN', 'MSW', 'PUR', 'TDF', 'WH'],
'SUN': ['SUN'],
'DPV': [],
'WAS': ['OBL', 'OBS', 'OBG', 'MSB', 'SLW'],
'WND': ['WND'],
'WWW': ['WDL', 'WDS', 'AB', 'BLQ']
}
for category in facility_fuel_cats.keys():
fuels = facility_fuel_cats[category]
facility_df.loc[facility_df['fuel'].isin(fuels),
'fuel category'] = category
facility_df.head()
facility_df.dtypes
facility_df.loc[facility_df['NERC'].isnull(), 'state'].unique()
Explanation: Filter out data older than 2014 to reduce size
End of explanation
folder = os.path.join('Data storage', 'Derived data', 'state gen data')
states = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE",
"FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS",
"KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS",
"MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY",
"NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
"SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"]
state_list = []
for state in states:
path = os.path.join(folder, '{} fuels gen.csv'.format(state))
df = pd.read_csv(path, parse_dates=['datetime'])
state_list.append(df)
state_df = pd.concat(state_list)
state_df.reset_index(inplace=True, drop=True)
state_df.dtypes
state_df['state'] = state_df['geography'].str[-2:]
keep_cols = ['state', 'type', 'year', 'datetime', 'generation (MWh)',
'elec fuel (mmbtu)']
fuel_cats = facility_fuel_cats.keys()
state_df = state_df.loc[(state_df['year'] >= 2014) &
(state_df['type'].isin(fuel_cats)), keep_cols]
state_df['type'].unique()
Explanation: Import state-level generation data
End of explanation
annual_facility = facility_df.groupby(['year', 'state', 'fuel category']).sum()
# annual_facility.reset_index(inplace=True)
annual_facility.drop('plant id', axis=1, inplace=True)
annual_facility.head()
annual_state = state_df.groupby(['year', 'state', 'type']).sum()
# annual_state.reset_index(inplace=True)
annual_state.head(n=25)
Explanation: Total generation and fuel consumption for each fuel category
Annual
End of explanation
annual_state.loc[2016, 'CA', 'SUN']
annual_facility.loc[2016, 'CA', 'SUN']
Explanation: It's interesting that the facility data has fuel consumption for solar generation and the state data doesn't. Looking at a 923 data file, it's clear that the fuel consumption is just based on a conversion efficiency of 36.6% across all facilities.
End of explanation
for fuel in ['HYC', 'WND', 'COW', 'NG', 'NUC', 'SUN']:
state_total = annual_state.loc[2016, :, fuel]['generation (MWh)'].sum()
facility_total = annual_facility.loc[2016, :, fuel]['generation (MWh)'].sum()
error = (state_total - facility_total) / state_total
print('{} has an error of {:.2f}%'.format(fuel, error * 100))
Explanation: How much generation from large sources (Hydro, wind, coal, natural gas, and nuclear) is missed by monthly 923 data?
End of explanation
path = os.path.join('Data storage', 'EIA923_Schedules_2_3_4_5_M_12_2015_Final.xlsx')
frequency = pd.read_excel(path, sheetname='Page 6 Plant Frame', header=4)
frequency.head()
frequency.rename(columns={'Plant Id': 'plant id',
'Plant State': 'state',
'YEAR': 'year',
'Reporting\nFrequency': 'Reporting Frequency'}, inplace=True)
frequency.head()
frequency.dtypes
Explanation: 2015 generation and fuel consumption from annual vs monthly reporting plants
The goal here is to figure out how much of generation and fuel consumption from facilities that only report annually is in each NERC region (by state)
End of explanation
freq_cols = ['year', 'plant id', 'Reporting Frequency']
df = pd.merge(facility_df, frequency.loc[:, freq_cols], on=['year', 'plant id'])
df.head()
g = sns.factorplot(x='fuel category', y='generation (MWh)', hue='Reporting Frequency',
col='NERC', col_wrap=3, data=df, estimator=np.sum, ci=0, kind='bar',
palette='tab10')
g.set_xticklabels(rotation=30)
Explanation: Make a dataframe with generation, fuel consumption, and reporting frequency of facilities in 2015
End of explanation
df.loc[df['state'] == 'TX', 'NERC'].nunique()
Explanation: Number of NERC regions in a state
End of explanation
def annual(df, state):
    """Return the percent of gen & consumption by fuel type in each NERC region
    for a state"""
a = df.loc[(df.state == state) &
(df['Reporting Frequency'] == 'A')].copy()
a.drop(['plant id', 'year'], axis=1, inplace=True)
a = a.groupby(['NERC', 'fuel category']).sum()
fuels = set(a.index.get_level_values('fuel category'))
temp_list = []
for fuel in fuels:
temp = (a.xs(fuel, level='fuel category')
/ a.xs(fuel, level='fuel category').sum())
temp['fuel category'] = fuel
temp_list.append(temp)
result = pd.concat(temp_list)
result.reset_index(inplace=True)
result['state'] = state
rename_cols = {'generation (MWh)': '% generation',
'total fuel (mmbtu)': '% total fuel',
'elec fuel (mmbtu)': '% elec fuel'}
result.rename(columns=rename_cols, inplace=True)
return result
def annual_month(df, state):
    """Return the percent of gen & consumption by fuel type and month in each
    NERC region for a state"""
a = df.loc[(df.state == state) &
(df['Reporting Frequency'] == 'A')].copy()
a.drop(['plant id', 'year'], axis=1, inplace=True)
a = a.groupby(['NERC', 'fuel category', 'month']).sum()
fuels = set(a.index.get_level_values('fuel category'))
temp_list = []
for fuel in fuels:
for month in range(1, 13):
temp = (a.xs(fuel, level='fuel category')
.xs(month, level='month')
/ a.xs(fuel, level='fuel category')
.xs(month, level='month')
.sum())
temp['fuel category'] = fuel
temp['month'] = month
temp_list.append(temp)
result = pd.concat(temp_list)
result.reset_index(inplace=True)
result['state'] = state
rename_cols = {'generation (MWh)': '% generation',
'total fuel (mmbtu)': '% total fuel',
'elec fuel (mmbtu)': '% elec fuel'}
result.rename(columns=rename_cols, inplace=True)
return result
Explanation: Fraction of generation/consumption from Annual reporting facilities in each NERC region of a state
This is development of a method that will be used to approximate the fraction of EIA-estimated generation and consumption within each state that gets apportioned to each NERC regions (when there is more than one). The idea is to take data from the most recent "final" EIA-923 and use the annual reporting facilities to approximate the divisions for more recent data. I still need to figure out if it's better to do the calculation by month within a year or just for the year as a whole.
Determining if it's better to do month-by-month vs a single value for the whole year will depend on if the share of generation/consumption from Annual reporting facilities in each NERC changes much over the course of the year. There is the potential for error either way, and maybe even differences by state. Annual is certainly simpler.
While looking at data for Texas, I've discovered that generation from Annual reporting facilities can be negative. Need to figure out how (if?) to deal with this...
Conclusion
While there can be variation of % generation in each NERC within a state over the course of 2015, most fuel categories across most states are quite stable. And when fuels do a have a wide spread over the year, they also tend to not be a large fraction of total generation within the NERC region. Given these observations, I'm going to stick with a split calculated as the average over an entire year.
End of explanation
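# a quick, hypothetical sketch of how this annual split could be applied (illustrative only,
# not the original analysis): apportion EIA's 2016 state-level natural gas generation for
# Texas across its NERC regions using the shares computed by annual()
tx_split = annual(df, 'TX')
tx_ng = tx_split.loc[tx_split['fuel category'] == 'NG', ['NERC', '% generation']].copy()
tx_state_ng = annual_state.loc[(2016, 'TX', 'NG'), 'generation (MWh)']
tx_ng['apportioned generation (MWh)'] = tx_ng['% generation'] * tx_state_ng
tx_ng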
# keep separate lists so the annual split is not silently overwritten by the monthly one
annual_list = []
for state in states:
    num_nerc = df.loc[df.state == state, 'NERC'].nunique()
    if num_nerc > 1:
        annual_list.append(annual(df, state))
monthly_list = []
for state in states:
    num_nerc = df.loc[df.state == state, 'NERC'].nunique()
    if num_nerc > 1:
        monthly_list.append(annual_month(df, state))
fuel_by_nerc_month = pd.concat(monthly_list).reset_index(drop=True)
fuel_by_nerc = pd.concat(annual_list).reset_index(drop=True)
fuel_by_nerc.head()
fuel_by_nerc_month.tail()
split_states = []
for state in states:
if df.loc[df.state == state, 'NERC'].nunique() > 1:
split_states.append(state)
split_states
cols = ['state', 'NERC', 'fuel category']
a = fuel_by_nerc_month.groupby(cols).std()
a.drop('month', axis=1, inplace=True)
a.xs('AR', level='state')
a[a > .1].dropna(how='all')
fuels = ['NG', 'HYC', 'COW', 'GEO', 'WND', 'SUN']
sns.factorplot(x='month', y='% generation', hue='fuel category', col='NERC',
row='state',
data=fuel_by_nerc_month.loc[(fuel_by_nerc_month['fuel category'].isin(fuels)) &
(fuel_by_nerc_month['NERC'] != '-')],
n_boot=1)
path = os.path.join('Figures', 'SI', 'Annual facility seasonal gen variation.pdf')
# plt.savefig(path, bbox_inches='tight')
fuel_by_nerc_month.loc[(fuel_by_nerc_month.state=='TX') &
(fuel_by_nerc_month['fuel category'] == 'WWW')]
df.loc[(df.state == 'TX') &
(df['fuel category'] == 'WWW') &
(df['Reporting Frequency'] == 'A')].groupby(['NERC', 'month', 'fuel category']).sum()
df.loc[(df.state == 'TX') &
(df['fuel category'] == 'WWW') &
(df['Reporting Frequency'] == 'A')].groupby(['NERC', 'fuel category']).sum()
Explanation: This is the percent of generation, total fuel consumption, and electric fuel consumption from facilities that report annually to EIA-923
End of explanation
NERC_states = ['WY', 'SD', 'NE', 'OK', 'TX', 'NM', 'LA', 'AR',
'MO', 'MN', 'IL', 'KY', 'VA', 'FL']
error_list = []
for state in NERC_states:
error = (annual_state.loc[2016, state]
- annual_facility.loc[2016, state]) / annual_state.loc[2016, state]
error['state'] = state
for col in ['generation (MWh)']:#, 'elec fuel (mmbtu)']:
if error.loc[error[col] > 0.05, col].any():
error_list.append(error.loc[error[col] > 0.05])
Explanation: States that include more than one NERC region
End of explanation
pd.concat(error_list)
Explanation: The dataframe below shows all states with more than one NERC region where facility generation is at least 5% below EIA's state-level estimate in 2016.
End of explanation |
15,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How Bias Enters a Model
This notebook is a simple demonstration of how bias with respect to an attribute can get encoded into a model, even if the labels are perfectly accurate and the model is unaware of the attribute.
Download notebook file
View notebook on github
Step1: Generate Data
Let's generate some data with a simple model. There's a binary sensitive attribute $A$, predictor1 $p_1$ uncorrelated with the attribute, predictor2 $p_2$ correlated with the attribute, random noise $\epsilon$, and a binary label $y$ that's correlated with both predictors.
$$P(y=1) = \mathrm{Logistic}(p_1 + p_2 + \epsilon)$$
where
$$p1 \sim \mathrm{Normal}(0,1)$$
$$p2 \sim \mathrm{Normal}(0,1) \iff A=a$$
$$p2 \sim \mathrm{Normal}(1,1) \iff A=b$$
$$\epsilon \sim \mathrm{Normal}(0,1)$$
Step2: Above you can see that the probability of (actually) being in the positive class is correlated with the attribute and with both of the predictors.
Step3: Fit a Model
We have our dataset now, with two predictors and a binary outcome. Let's fit a logistic regression model to it.
Step4: Now generate predictions from the model.
Step6: Amplification of bias
There was an initial disparity between the attribute classes in terms of their actual labels
Step7: Model Performance
ROC Curve
Let's plot ROC curves for the whole dataset and for each attribute group to compare model performance
Step8: From the ROC curves, it looks like the model performs about equally well for all groups.
True Positive Rate
Let's check the true positive rate $P(\hat{y}=1 \vert y=1)$ vs the score threshold. We find that the true positive rate is better at all thresholds for the attribute $A=b$ group
Step9: So it looks like the model will actually perform better in terms of TPR (aka recall) for group $A=b$.
Now let's check the false positive rate $P(\hat{y}=1 \vert y=0)$ vs the score threshold
Step11: We find that the false positive rate is much higher at all thresholds for the $A=b$ group. If the negative class is preferred (e.g. in a model predicting fraud, spam, defaulting on a loan, etc.), that means we're much more likely to falsely classify an actually-good member of group $b$ as bad, compared to a actually-good member of group $a$. | Python Code:
%matplotlib inline
from IPython.display import display, Markdown, Latex
import numpy as np
import sklearn
import matplotlib
import matplotlib.pylab as plt
import sklearn.linear_model
import sklearn.metrics  # roc_curve is used further down
import seaborn
import scipy.special
seaborn.set(rc={"figure.figsize": (8, 6)}, font_scale=1.5)
Explanation: How Bias Enters a Model
This notebook is a simple demonstration of how bias with respect to an attribute can get encoded into a model, even if the labels are perfectly accurate and the model is unaware of the attribute.
Download notebook file
View notebook on github
End of explanation
n = 10000 # Sample size.
# Create an attribute.
attribute = np.choose(np.random.rand(n) > 0.5, ['a', 'b'])
# Create an uncorrelated predictor.
predictor1 = np.random.randn(n)
# Create a predictor correlated with the attribute.
disparity_scale = 1.0
predictor2 = np.random.randn(n) + ((attribute == 'b') * disparity_scale)
# Generate random noise.
noise_scale = 1.0
noise = np.random.randn(n) * noise_scale
# Calculate the probability of the binary label.
scale = 1.0
p_outcome = scipy.special.expit(scale * (predictor1 + predictor2 + noise))
# Calculate the outcome.
y = p_outcome > np.random.rand(n)
seaborn.set(rc={"figure.figsize": (8, 6)}, font_scale=1.5)
c0 = seaborn.color_palette()[0]
c1 = seaborn.color_palette()[2]
plt.figure(figsize=(8,6))
plt.bar([0], [y[attribute == 'a'].mean()], fc=c0)
plt.bar([1], [y[attribute == 'b'].mean()], fc=c1)
plt.xticks([0.0,1.0], ['$A=a$', '$A=b$'])
plt.xlabel("Attribute Value")
plt.ylabel("p(label=True)")
plt.ylim(0,1);
plt.figure()
plt.scatter(predictor1[attribute == 'b'], p_outcome[attribute == 'b'], c=c1, alpha=0.25, label='A=b')
plt.scatter(predictor1[attribute == 'a'], p_outcome[attribute == 'a'], c=c0, alpha=0.25, label='A=a')
plt.xlabel('predictor1')
plt.ylabel('p(label=True)')
plt.legend(loc='best')
plt.figure()
plt.scatter(predictor2[attribute == 'b'], p_outcome[attribute == 'b'], c=c1, alpha=0.25, label='A=b')
plt.scatter(predictor2[attribute == 'a'], p_outcome[attribute == 'a'], c=c0, alpha=0.25, label='A=a')
plt.xlabel('predictor2')
plt.ylabel('p(label=True)')
plt.legend(loc='best');
Explanation: Generate Data
Let's generate some data with a simple model. There's a binary sensitive attribute $A$, predictor1 $p_1$ uncorrelated with the attribute, predictor2 $p_2$ correlated with the attribute, random noise $\epsilon$, and a binary label $y$ that's correlated with both predictors.
$$P(y=1) = \mathrm{Logistic}(p_1 + p_2 + \epsilon)$$
where
$$p1 \sim \mathrm{Normal}(0,1)$$
$$p2 \sim \mathrm{Normal}(0,1) \iff A=a$$
$$p2 \sim \mathrm{Normal}(1,1) \iff A=b$$
$$\epsilon \sim \mathrm{Normal}(0,1)$$
End of explanation
display(Markdown("Condition Positive Fraction for each attribute class: (a,b): {:.3f} {:.3f}".format(
y[attribute == 'a'].mean(), y[attribute == 'b'].mean())))
display(Markdown(
"Members of the $A=b$ group are {:.0f}% more likely to have the positive label than $a$ group.".format(
100.0 * (y[attribute == 'b'].mean()/y[attribute == 'a'].mean() - 1.0))))
Explanation: Above you can see that the probability of (actually) being in the positive class is correlated with the attribute and with both of the predictors.
End of explanation
# Put the predictors into the expected sklearn format.
X = np.vstack([predictor1, predictor2]).T
# Initialize our logistic regression classifier.
clf = sklearn.linear_model.LogisticRegression()
# Perform the fit.
clf.fit(X, y)
# Model fit parameters:
clf.intercept_, clf.coef_
Explanation: Fit a Model
We have our dataset now, with two predictors and a binary outcome. Let's fit a logistic regression model to it.
End of explanation
p = clf.predict_proba(X)[:,1]
yhat = p > 0.5
Explanation: Now generate predictions from the model.
End of explanation
plt.figure(figsize=(8,6))
plt.bar([0, 2], [y[attribute == 'a'].mean(), y[attribute == 'b'].mean()],
fc=c0, label='Actual Labels')
plt.bar([1, 3], [yhat[attribute == 'a'].mean(), yhat[attribute == 'b'].mean()],
fc=c1, label='Predicted Labels')
plt.xticks([0.0, 1.0, 2.0, 3.0], ['$A=a$ \n actual', '$A=a$ \n pred', '$A=b$ \n actual', '$A=b$ \n pred'])
plt.ylabel("Positive Label Fraction")
plt.ylim(0,1)
plt.legend(loc='best');
display(Markdown("Condition Positive fraction for each attribute class: (a,b): {:.0f}% {:.0f}%".format(
100.0 * y[attribute == 'a'].mean(), 100.0 * y[attribute == 'b'].mean())))
display(Markdown("Predicted Positive fraction for each attribute class: (a,b): {:.0f}% {:.0f}%".format(
100.0 * yhat[attribute == 'a'].mean(), 100.0 * yhat[attribute == 'b'].mean())))
display(Markdown(
"""
So the initial {:.0f}% disparity in the _actual_ labels is amplified by the model.
**Members of the $A=b$ group are {:.0f}% more likely to have the positive _predicted_ label than the $a$ group.**
The model has amplified the initial disparity by a factor of {:.2f}.
""".format(
100.0 * (y[attribute == 'b'].mean()/y[attribute == 'a'].mean() - 1.0),
100.0 * (yhat[attribute == 'b'].mean()/yhat[attribute == 'a'].mean() - 1.0),
(yhat[attribute == 'b'].mean()/yhat[attribute == 'a'].mean()) /
(y[attribute == 'b'].mean()/y[attribute == 'a'].mean())
)))
Explanation: Amplification of bias
There was an initial disparity between the attribute classes in terms of their actual labels: the $A=b$ group was much more likely to get the positive label than the $A=a$ group. Now let's see how that disparity is reflected in the predicted labels.
End of explanation
fpr_all, tpr_all, t_all = sklearn.metrics.roc_curve(y, p)
fpr_falseclass, tpr_falseclass, t_falseclass = sklearn.metrics.roc_curve(y[attribute == 'a'], p[attribute == 'a'])
fpr_trueclass, tpr_trueclass, t_trueclass = sklearn.metrics.roc_curve(y[attribute == 'b'], p[attribute == 'b'])
plt.plot(fpr_falseclass, tpr_falseclass, label='attribute=a', alpha=0.5, lw=3)
plt.plot(fpr_all, tpr_all, label='all', alpha=0.5, lw=3)
plt.plot(fpr_trueclass, tpr_trueclass, label='attribute=b', alpha=0.5, lw=3)
plt.legend(loc='best')
plt.xlabel("FPR")
plt.ylabel("TPR");
Explanation: Model Performance
ROC Curve
Let's plot ROC curves for the whole dataset and for each attribute group to compare model performance:
End of explanation
plt.plot(t_falseclass, tpr_falseclass, label='attribute=a', lw=3)
plt.plot(t_all, tpr_all, label='all', lw=3)
plt.plot(t_trueclass, tpr_trueclass, label='attribute=b', lw=3)
plt.legend(loc='best')
plt.xlabel("threshold")
plt.ylabel("TPR");
Explanation: From the ROC curves, it looks like the model performs about equally well for all groups.
True Positive Rate
Let's check the true positive rate $P(\hat{y}=1 \vert y=1)$ vs the score threshold. We find that the true positive rate is better at all thresholds for the attribute $A=b$ group:
End of explanation
plt.plot(t_falseclass, fpr_falseclass, label='attribute=a', lw=3)
plt.plot(t_all, fpr_all, label='all', lw=3)
plt.plot(t_trueclass, fpr_trueclass, label='attribute=b', lw=3)
plt.legend(loc='best')
plt.xlabel("threshold")
plt.ylabel("FPR");
Explanation: So it looks like the model will actually perform better in terms of TPR (aka recall) for group $A=b$.
Now let's check the false positive rate $P(\hat{y}=1 \vert y=0)$ vs the score threshold:
End of explanation
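As an illustrative cross-check (this block is an addition for this write-up, not part of the original notebook; it reuses y, yhat, and attribute from the cells above), the same per-group rates at the 0.5 threshold can be read off sklearn's confusion matrix:
from sklearn.metrics import confusion_matrix

def group_rates(mask):
    # confusion_matrix returns [[tn, fp], [fn, tp]] for boolean labels.
    tn, fp, fn, tp = confusion_matrix(y[mask], yhat[mask]).ravel()
    return {'TPR': tp / (tp + fn), 'FPR': fp / (fp + tn)}

{grp: group_rates(attribute == grp) for grp in ['a', 'b']}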
def fpr(y_true, y_pred):
fp = (np.logical_not(y_true) & y_pred).sum()
cn = np.logical_not(y_true).sum()
return 100.0 * fp * 1.0 / cn
display(Markdown(
"""
At a threshold of model score $= 0.5$, the false positive rate is {:.0f}% overall, **{:.0f}% for group $a$,
and {:.0f}% for group $b$**.
""".format(
fpr(y, yhat), fpr(y[attribute=='a'], yhat[attribute=='a']), fpr(y[attribute=='b'], yhat[attribute=='b']))))
Explanation: We find that the false positive rate is much higher at all thresholds for the $A=b$ group. If the negative class is preferred (e.g. in a model predicting fraud, spam, defaulting on a loan, etc.), that means we're much more likely to falsely classify an actually-good member of group $b$ as bad, compared to an actually-good member of group $a$.
End of explanation |
15,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MNE inverse solution on evoked data with a mixed source space
Create a mixed source space and compute an MNE inverse solution on an
evoked dataset.
Step1: Set up our source space
List substructures we are interested in. We select only the
substructures we want to include in the source space
Step2: Get a surface-based source space, here with few source points for speed
in this demonstration, in general you should use oct6 spacing!
Step3: Now we create a mixed src space by adding the volume regions specified in the
list labels_vol. First, read the aseg file and the source space bounds
using the inner skull surface (here using 10mm spacing to save time,
we recommend something smaller like 5.0 in actual analyses)
Step4: View the source space
Step5: We could write the mixed source space with
Step6: Compute the fwd matrix
Step7: Compute inverse solution
Step8: Plot the mixed source estimate
Step9: Plot the surface
Step10: Plot the volume
Step11: Process labels
Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space | Python Code:
# Author: Annalisa Pascarella <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
from nilearn import plotting
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse
# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
Explanation: Compute MNE inverse solution on evoked data with a mixed source space
Create a mixed source space and compute an MNE inverse solution on an
evoked dataset.
End of explanation
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
Explanation: Set up our source space
List substructures we are interested in. We select only the
substructures we want to include in the source space:
End of explanation
src = mne.setup_source_space(subject, spacing='oct5',
add_dist=False, subjects_dir=subjects_dir)
Explanation: Get a surface-based source space, here with few source points for speed
in this demonstration, in general you should use oct6 spacing!
End of explanation
vol_src = mne.setup_volume_source_space(
subject, mri=fname_aseg, pos=10.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir,
add_interpolator=False, # just for speed, usually this should be True
verbose=True)
# Generate the mixed source space
src += vol_src
print(f"The source space contains {len(src)} spaces and "
f"{sum(s['nuse'] for s in src)} vertices")
Explanation: Now we create a mixed src space by adding the volume regions specified in the
list labels_vol. First, read the aseg file and the source space bounds
using the inner skull surface (here using 10mm spacing to save time,
we recommend something smaller like 5.0 in actual analyses):
End of explanation
src.plot(subjects_dir=subjects_dir)
Explanation: View the source space
End of explanation
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True, overwrite=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
Explanation: We could write the mixed source space with::
write_source_spaces(fname_mixed_src, src, overwrite=True)
We can also export source positions to NIfTI file and visualize it again:
End of explanation
fwd = mne.make_forward_solution(
fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
del src # save memory
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
print(f"The fwd source space contains {len(fwd['src'])} spaces and "
f"{sum(s['nuse'] for s in fwd['src'])} vertices")
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
Explanation: Compute the fwd matrix
End of explanation
snr = 3.0 # use smaller SNR for raw data
inv_method = 'dSPM' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
loose = dict(surface=0.2, volume=1.)
lambda2 = 1.0 / snr ** 2
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)
del fwd
stc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
src = inverse_operator['src']
Explanation: Compute inverse solution
End of explanation
initial_time = 0.1
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori='vector')
brain = stc_vec.plot(
hemi='both', src=inverse_operator['src'], views='coronal',
initial_time=initial_time, subjects_dir=subjects_dir,
brain_kwargs=dict(silhouette=True))
Explanation: Plot the mixed source estimate
End of explanation
brain = stc.surface().plot(initial_time=initial_time,
subjects_dir=subjects_dir)
Explanation: Plot the surface
End of explanation
fig = stc.volume().plot(initial_time=initial_time, src=src,
subjects_dir=subjects_dir)
Explanation: Plot the volume
End of explanation
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
subject, parc=parc, subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(
[stc], labels_parc, src, mode='mean', allow_empty=True)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stc.times, label_ts[0][-1, :].T, 'r', label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
Explanation: Process labels
Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space
End of explanation |
15,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content-based recommender using Deep Structured Semantic Model
An example of how to build a Deep Structured Semantic Model (DSSM) for incorporating complex content-based features into a recommender system. See Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. This example does not attempt to provide a datasource or train a model, but merely show how to structure a complex DSSM network.
Step3: Bag of words random projection
A previous version of this example contained a bag-of-words random projection example; it is kept here for reference but not used in the next example.
Random Projection is a dimension reduction technique that guarantees that the distortion of the pair-wise distances between your original data points stays within a certain bound.
What is even more interesting is that the dimension to project onto to guarantee that bound does not depend on the original number of dimensions but solely on the total number of datapoints (and on the bound itself).
You can see more explanation in this blog post
Step4: With padding
Step5: Content-based recommender / ranking system using DSSM
For example in the search result ranking problem
Step6: It is quite hard to visualize the network since it is relatively complex but you can see the two-pronged structure, and the resnet18 branch
Step7: We can print the summary of the network using dummy data. We can see it is already training on 32M parameters! | Python Code:
import warnings
import mxnet as mx
from mxnet import gluon, nd, autograd, sym
import numpy as np
from sklearn.random_projection import johnson_lindenstrauss_min_dim
# Define some constants
max_user = int(1e5)
title_vocab_size = int(3e4)
query_vocab_size = int(3e4)
num_samples = int(1e4)
hidden_units = 128
epsilon_proj = 0.25
ctx = mx.gpu() if len(mx.test_utils.list_gpus()) > 0 else mx.cpu()
Explanation: Content-based recommender using Deep Structured Semantic Model
An example of how to build a Deep Structured Semantic Model (DSSM) for incorporating complex content-based features into a recommender system. See Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. This example does not attempt to provide a datasource or train a model, but merely show how to structure a complex DSSM network.
End of explanation
proj_dim = johnson_lindenstrauss_min_dim(num_samples, epsilon_proj)
print("To keep a distance disruption ~< {}% of our {} samples we need to randomly project to at least {} dimensions".format(epsilon_proj*100, num_samples, proj_dim))
class BagOfWordsRandomProjection(gluon.HybridBlock):
def __init__(self, vocab_size, output_dim, random_seed=54321, pad_index=0):
"""
:param int vocab_size: number of elements in the vocabulary
:param int output_dim: projection dimension
:param int random_seed: seed to use to guarantee the same projection
:param int pad_index: index of the vocabulary used for padding sentences
"""
super(BagOfWordsRandomProjection, self).__init__()
self._vocab_size = vocab_size
self._output_dim = output_dim
proj = self._random_unit_vecs(vocab_size=vocab_size, output_dim=output_dim, random_seed=random_seed)
# we set the projection of the padding word to 0
proj[pad_index, :] = 0
self.proj = self.params.get_constant('proj', value=proj)
def _random_unit_vecs(self, vocab_size, output_dim, random_seed):
rs = np.random.RandomState(seed=random_seed)
W = rs.normal(size=(vocab_size, output_dim))
Wlen = np.linalg.norm(W, axis=1)
W_unit = W / Wlen[:,None]
return W_unit
def hybrid_forward(self, F, x, proj):
"""
:param nd or sym F:
:param nd.NDArray x: index of tokens
returns the sum of the projected embeddings of each token
"""
embedded = F.Embedding(x, proj, input_dim=self._vocab_size, output_dim=self._output_dim)
return embedded.sum(axis=1)
bowrp = BagOfWordsRandomProjection(1000, 20)
bowrp.initialize()
bowrp(mx.nd.array([[10, 50, 100], [5, 10, 0]]))
Explanation: Bag of words random projection
A previous version of this example contained a bag-of-words random projection example; it is kept here for reference but not used in the next example.
Random Projection is a dimension reduction technique that guarantees that the distortion of the pair-wise distances between your original data points stays within a certain bound.
What is even more interesting is that the dimension to project onto to guarantee that bound does not depend on the original number of dimensions but solely on the total number of datapoints (and on the bound itself).
You can see more explanation in this blog post
End of explanation
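To make the distance-preservation claim concrete, here is a small self-contained check (an illustrative sketch, not part of the original example, with arbitrary sizes chosen for the demo):
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.RandomState(0)
n_points, original_dim, projected_dim = 200, 10000, 1000
X_orig = rng.randn(n_points, original_dim)
# Gaussian random projection, scaled so squared norms are preserved in expectation.
R = rng.randn(original_dim, projected_dim) / np.sqrt(projected_dim)
X_proj = X_orig.dot(R)
ratios = pdist(X_proj) / pdist(X_orig)
print("pairwise distance ratios: min={:.3f} max={:.3f}".format(ratios.min(), ratios.max()))
The ratios should stay close to 1, which is exactly the kind of bound the johnson_lindenstrauss_min_dim computation above is about.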
bowrp(mx.nd.array([[10, 50, 100, 0], [5, 10, 0, 0]]))
Explanation: With padding:
End of explanation
proj_dim = 128
class DSSMRecommenderNetwork(gluon.HybridBlock):
def __init__(self, query_vocab_size, proj_dim, max_user, title_vocab_size, hidden_units, random_seed=54321, p=0.5):
super(DSSMRecommenderNetwork, self).__init__()
with self.name_scope():
# User/Query pipeline
self.user_embedding = gluon.nn.Embedding(max_user, proj_dim)
self.user_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# Instead of bag of words, we use learned embeddings + stacked biLSTM average
self.query_text_embedding = gluon.nn.Embedding(query_vocab_size, proj_dim)
self.query_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)
self.query_text_mlp = gluon.nn.Dense(hidden_units, activation="relu")
self.query_dropout = gluon.nn.Dropout(p)
self.query_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# Item pipeline
# Instead of bag of words, we use learned embeddings + stacked biLSTM average
self.title_embedding = gluon.nn.Embedding(title_vocab_size, proj_dim)
self.title_lstm = gluon.rnn.LSTM(hidden_units, 2, bidirectional=True)
self.title_mlp = gluon.nn.Dense(hidden_units, activation="relu")
# You could use vgg here for example
self.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=False).features
self.image_mlp = gluon.nn.Dense(hidden_units, activation="relu")
self.item_dropout = gluon.nn.Dropout(p)
self.item_mlp = gluon.nn.Dense(hidden_units, activation="relu")
def hybrid_forward(self, F, user, query_text, title, image):
# Query
user = self.user_embedding(user)
user = self.user_mlp(user)
query_text = self.query_text_embedding(query_text)
query_text = self.query_lstm(query_text.transpose((1,0,2)))
# average the states
query_text = query_text.mean(axis=0)
query_text = self.query_text_mlp(query_text)
query = F.concat(user, query_text)
query = self.query_dropout(query)
query = self.query_mlp(query)
# Item
title_text = self.title_embedding(title)
title_text = self.title_lstm(title_text.transpose((1,0,2)))
# average the states
title_text = title_text.mean(axis=0)
title_text = self.title_mlp(title_text)
image = self.image_embedding(image)
image = self.image_mlp(image)
item = F.concat(title_text, image)
item = self.item_dropout(item)
item = self.item_mlp(item)
# Cosine Similarity
query = query.expand_dims(axis=2)
item = item.expand_dims(axis=2)
sim = F.batch_dot(query, item, transpose_a=True) / (query.norm(axis=1) * item.norm(axis=1) + 1e-9).expand_dims(axis=2)
return sim.squeeze(axis=2)
network = DSSMRecommenderNetwork(
query_vocab_size,
proj_dim,
max_user,
title_vocab_size,
hidden_units
)
network.initialize(mx.init.Xavier(), ctx)
# Load pre-trained resnet18 weights
with network.name_scope():
network.image_embedding = gluon.model_zoo.vision.resnet18_v2(pretrained=True, ctx=ctx).features
Explanation: Content-based recommender / ranking system using DSSM
For example in the search result ranking problem:
You have users, that have performed text-based searches. They were presented with results, and selected one of them.
Results are composed of a title and an image.
Your positive examples will be the clicked items in the search results, and the negative examples are sampled from the non-clicked examples.
The network will jointly learn embeddings for users and query text making up the "Query", title and image making the "Item" and learn how similar they are.
After training, you can index the embeddings for your items and do a knn search with your query embeddings using the cosine similarity to return ranked items
End of explanation
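The retrieval step described above (index the item embeddings, then rank them by cosine similarity against a query embedding) is not implemented in this example. A minimal numpy-only sketch of that idea, with random arrays standing in for the trained network's outputs, might look like this:
import numpy as np

def rank_items(query_emb, item_embs, top_k=5):
    # Cosine similarity between one query embedding and every indexed item embedding.
    q = query_emb / (np.linalg.norm(query_emb) + 1e-9)
    items = item_embs / (np.linalg.norm(item_embs, axis=1, keepdims=True) + 1e-9)
    sims = items.dot(q)
    top = np.argsort(-sims)[:top_k]
    return top, sims[top]

rng = np.random.RandomState(0)
fake_item_index = rng.randn(1000, 128)   # stand-in for precomputed item embeddings
fake_query = rng.randn(128)              # stand-in for a query embedding
rank_items(fake_query, fake_item_index)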
mx.viz.plot_network(network(
mx.sym.var('user'), mx.sym.var('query_text'), mx.sym.var('title'), mx.sym.var('image')),
shape={'user': (1,1), 'query_text': (1,30), 'title': (1,30), 'image': (1,3,224,224)},
node_attrs={"fixedsize":"False"})
Explanation: It is quite hard to visualize the network since it is relatively complex but you can see the two-pronged structure, and the resnet18 branch
End of explanation
user = mx.nd.array([[200], [100]], ctx)
query = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text
title = mx.nd.array([[10, 20, 0, 0, 0], [40, 50, 0, 0, 0]], ctx) # Example of an encoded text
image = mx.nd.random.uniform(shape=(2,3, 224,224), ctx=ctx) # Example of an encoded image
network.summary(user, query, title, image)
network(user, query, title, image)
Explanation: We can print the summary of the network using dummy data. We can see it is already training on 32M parameters!
End of explanation |
15,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving differential equations
Given the following differential equation
Step1: Exercise
Write code for one more iteration with these same parameters and display the result.
Step2: Wait... what is going on? It turns out this $\Delta t$ is too large; let's try 20 iterations
Step3: This is going to take a while, so let's just tell Python what to do and not be bothered until it finishes; we can use a for loop and a list to store all the values of the trajectory
Step4: Now that we have these values, we can plot the behavior of this system; first we import the matplotlib library
Step5: We call the plot function
Step6: However, because the integration step we used is too large, the solution is quite inaccurate; we can see this by plotting it against what we know is the solution of our problem
Step7: If we now use a very large number of pieces, we can improve our approximation
Step8: odeint
This method works so well that it is already programmed into the scipy library, so we only have to import that library to use it.
However, we must be careful when declaring the function $F(x, t)$. The first argument of the function must refer to the state of the system, that is $x$, and the second must be the independent variable, in our case time.
Step9: Exercise
Plot the behavior of the following differential equation.
$$
\dot{x} = x^2 - 5 x + \frac{1}{2} \sin{x} - 2
$$
Note
Step10: Sympy
And finally, there are times when we can even obtain an analytic solution of a differential equation, as long as it satisfies certain simplicity conditions.
Step11: Exercise
Implement the code needed to obtain the analytic solution of the following differential equation
Step12: Solving higher-order differential equations
If we now want to obtain the behavior of a higher-order differential equation, such as
Step13: Exercise
Implement the solution of the following differential equation by means of a state-space representation model
Step14: Transfer functions
However, this is not the easiest way to obtain the solution; we can also apply a Laplace transform and use the functions of the control library to simulate the transfer function of this equation. Applying the Laplace transform, we obtain
Step15: Exercise
Model the differential equation from the previous exercise mathematically, using a transfer function representation.
Nota | Python Code:
x0 = 1
Δt = 1
# To type Greek symbols such as Δ, just write the symbol's name
# preceded by a backslash (\Delta) and press Tab once
F = lambda x : -x
x1 = x0 + F(x0)*Δt
x1
x2 = x1 + F(x1)*Δt
x2
Explanation: Solving differential equations
Given the following differential equation:
$$
\dot{x} = -x
$$
we want to obtain the response of the system it represents, that is, the values that $x$ takes.
If we analyze this differential equation, we can see that the solution of this system is a function $\varphi(t)$ such that when we differentiate it we obtain the negative of that same function, that is:
$$
\frac{d}{dt} \varphi(t) = -\varphi(t)
$$
and after a bit of thought, we can realize that the function we want is:
$$
\varphi(t) = e^{-t}
$$
However, we will often not have such simple functions (this is certainly not the case in robotics, where we usually have nonlinear differential equations of order $n$), so in this practice session we will look at some strategies for obtaining solutions to this differential equation, both numerical and symbolic.
Euler's method
Euler's method for obtaining the behavior of a differential equation is based on the basic intuition behind the derivative; say we have a general differential equation:
$$
\frac{dy}{dx} = y' = F(x, y)
$$
where $F(x, y)$ can be any function that depends on $x$ and/or $y$. We can then split the behavior of the graph into pieces, so that we only compute one small piece at a time, approximating the behavior of the differential equation with that of a line whose slope is the derivative:
Image: "Método de Euler" by Vero.delgado, own work, licensed under CC BY-SA 3.0, via Wikimedia Commons (http://commons.wikimedia.org/wiki/File:M%C3%A9todo_de_Euler.jpg).
This line that approximates the differential equation, we may recall, has the structure:
$$
y = b + mx
$$
so if we substitute the derivative for $m$ and the previous value of the differential equation for $b$, we obtain something like:
$$
\overbrace{y_{i+1}}^{\text{new value of }y} = \overbrace{y_i}^{\text{old value of }y} + \overbrace{\frac{dy}{dx}}^{\text{slope}} \overbrace{\Delta x}^{\text{distance in }x}
$$
but we know the value of $\frac{dy}{dx}$: it is our differential equation, so we can write this as:
$$
y_{i+1} = y_i + F(x_i, y_i) \Delta x
$$
Let's work out a few iterations of our system; we start with 10 iterations over 10 seconds, with initial condition $x(0) = 1$, which means:
$$
\begin{align}
\Delta t &= 1 \\
x(0) &= 1 \\
\dot{x}(0) &= -1
\end{align}
$$
End of explanation
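Since this same update will be repeated many times below, it can be convenient to wrap it in a small reusable helper (an illustrative sketch added here, not part of the original notebook):
def euler(F, x0, t0, tf, n):
    """Integrate dx/dt = F(x) from t0 to tf using n Euler steps; returns the list of states."""
    dt = (tf - t0) / n
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + F(xs[-1]) * dt)
    return xs

euler(lambda x: -x, 1, 0, 10, 20)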
x3 = # Write the code for your calculation here
from pruebas_2 import prueba_2_1
prueba_2_1(x0, x1, x2, x3, _)
Explanation: Exercise
Write code for one more iteration with these same parameters and display the result.
End of explanation
x0 = 1
n = 20
Δt = 10/n
F = lambda x : -x
x1 = x0 + F(x0)*Δt
x1
x2 = x1 + F(x1)*Δt
x2
x3 = x2 + F(x2)*Δt
x3
Explanation: Wait... what is going on? It turns out this $\Delta t$ is too large; let's try 20 iterations:
$$
\begin{align}
\Delta t &= 0.5 \\
x(0) &= 1
\end{align}
$$
End of explanation
xs = [x0]
for t in range(20):
xs.append(xs[-1] + F(xs[-1])*Δt)
xs
Explanation: This is going to take a while, so let's just tell Python what to do and not be bothered until it finishes; we can use a for loop and a list to store all the values of the trajectory:
End of explanation
%matplotlib inline
from matplotlib.pyplot import plot
Explanation: Now that we have these values, we can plot the behavior of this system; first we import the matplotlib library:
End of explanation
plot(xs);
Explanation: We call the plot function:
End of explanation
from numpy import linspace, exp
ts = linspace(0, 10, 20)
plot(xs)
plot(exp(-ts));
Explanation: However, because the integration step we used is too large, the solution is quite inaccurate; we can see this by plotting it against what we know is the solution of our problem:
End of explanation
xs = [x0]
n = 100
Δt = 10/n
for t in range(100):
xs.append(xs[-1] + F(xs[-1])*Δt)
ts = linspace(0, 10, 100)
plot(xs)
plot(exp(-ts));
Explanation: If we now use a very large number of pieces, we can improve our approximation:
End of explanation
from scipy.integrate import odeint
F = lambda x, t : -x
x0 = 1
ts = linspace(0, 10, 100)
xs = odeint(func=F, y0=x0, t=ts)
plot(ts, xs);
Explanation: odeint
This method works so well that it is already programmed into the scipy library, so we only have to import that library to use it.
However, we must be careful when declaring the function $F(x, t)$. The first argument of the function must refer to the state of the system, that is $x$, and the second must be the independent variable, in our case time.
End of explanation
ts = # Write here the code that generates an array of equally spaced points (linspace)
x0 = # Write the value of the initial condition
# Import the library functions you need here
G = lambda x, t: # Write here the code describing the calculations the function must perform
xs = # Write here the command needed to simulate the differential equation
plot(ts, xs);
from pruebas_2 import prueba_2_2
prueba_2_2(ts, xs)
Explanation: Exercise
Plot the behavior of the following differential equation.
$$
\dot{x} = x^2 - 5 x + \frac{1}{2} \sin{x} - 2
$$
Note: Make sure to import all the libraries you may need
End of explanation
from sympy import var, Function, dsolve
from sympy.physics.mechanics import mlatex, mechanics_printing
mechanics_printing()
var("t")
x = Function("x")(t)
x, x.diff(t)
solucion = dsolve(x.diff(t) + x, x)
solucion
Explanation: Sympy
And finally, there are times when we can even obtain an analytic solution of a differential equation, as long as it satisfies certain simplicity conditions.
End of explanation
# Declare the independent variable of the differential equation
var("")
# Declare the dependent variable of the differential equation
= Function("")()
# Write the differential equation in the required format (Equation = 0)
# inside the dsolve function
sol = dsolve()
sol
from pruebas_2 import prueba_2_3
prueba_2_3(sol)
Explanation: Exercise
Implement the code needed to obtain the analytic solution of the following differential equation:
$$
\dot{x} = x^2 - 5x
$$
End of explanation
from numpy import matrix, array
def F(X, t):
A = matrix([[0, 1], [-1, -1]])
B = matrix([[0], [1]])
return array((A*matrix(X).T + B).T).tolist()[0]
ts = linspace(0, 10, 100)
xs = odeint(func=F, y0=[0, 0], t=ts)
plot(xs);
Explanation: Solving higher-order differential equations
If we now want to obtain the behavior of a higher-order differential equation, such as:
$$
\ddot{x} = -\dot{x} - x + 1
$$
we have to convert it into a first-order differential equation in order to solve it numerically, which means turning it into a matrix differential equation. We start by writing it together with the identity $\dot{x} = \dot{x}$ as a system of equations:
$$
\begin{align}
\dot{x} &= \dot{x} \\
\ddot{x} &= -\dot{x} - x + 1
\end{align}
$$
If we factor the derivative operator out of the left-hand side, we have:
$$
\begin{align}
\frac{d}{dt} x &= \dot{x} \\
\frac{d}{dt} \dot{x} &= -\dot{x} - x + 1
\end{align}
$$
Or, in matrix form:
$$
\frac{d}{dt}
\begin{pmatrix}
x \\
\dot{x}
\end{pmatrix} =
\begin{pmatrix}
0 & 1 \\
-1 & -1
\end{pmatrix}
\begin{pmatrix}
x \\
\dot{x}
\end{pmatrix} +
\begin{pmatrix}
0 \\
1
\end{pmatrix}
$$
This equation is no longer second order; it is in fact first order, but our variable has grown into a state vector, which for the moment we will call $X$. We can therefore write it as:
$$
\frac{d}{dt} X = A X + B
$$
where:
$$
A = \begin{pmatrix}
0 & 1 \\
-1 & -1
\end{pmatrix} \quad \text{and} \quad B =
\begin{pmatrix}
0 \\
1
\end{pmatrix}
$$
and, just as before, declare a function to pass to odeint.
End of explanation
def G(X, t):
A = # Write here the code for matrix A
B = # Write here the code for vector B
return array((A*matrix(X).T + B).T).tolist()[0]
ts = linspace(0, 10, 100)
xs = odeint(func=G, y0=[0, 0], t=ts)
plot(xs);
from pruebas_2 import prueba_2_4
prueba_2_4(xs)
Explanation: Exercise
Implement the solution of the following differential equation by means of a state-space representation model:
$$
\ddot{x} = -8\dot{x} - 15x + 1
$$
Note: Take it easy and go step by step
* Start by writing the differential equation in your notebook, together with the same identity used in the example
* Factor out the derivative on the left-hand side to obtain the state of your system
* Extract the matrices A and B that correspond to this system
* Write the code needed to represent these matrices
End of explanation
from control import tf, step
F = tf([0, 0, 1], [1, 1, 1])
xs, ts = step(F)
plot(ts, xs);
Explanation: Transfer functions
However, this is not the easiest way to obtain the solution; we can also apply a Laplace transform and use the functions of the control library to simulate the transfer function of this equation. Applying the Laplace transform, we obtain:
$$
G(s) = \frac{1}{s^2 + s + 1}
$$
End of explanation
G = tf([], []) # Write the coefficients of the transfer function
xs, ts = step(G)
plot(ts, xs);
from pruebas_2 import prueba_2_5
prueba_2_5(ts, xs)
Explanation: Exercise
Model the differential equation from the previous exercise mathematically, using a transfer function representation.
Note: Again, don't despair; write your differential equation and apply the Laplace transform just as your grandparents taught you all those years ago...
End of explanation |
15,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
Step3: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Distributions and Histograms
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following
Step4: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
Step5: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array
Step6: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike rand, these values are not restricted to the interval [0,1).
The equation for a Gaussian curve is the following
Step7: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation
Step8: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
Step9: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
Step10: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT
Step11: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
Step12: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
Step13: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
Step14: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
Step15: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
Step16: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http
Step17: Next, plot a histogram of this data set (play around with the number of bins, too).
Step18: Now, calculate and print the mean and standard deviation of this distribution. | Python Code:
import numpy as np
Explanation: Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported:
End of explanation
np.linspace(0,10,11)
Explanation: You can do a lot with the numpy module. Below is an example to jog your memory:
End of explanation
# your code here
#start by defining the length of the array
arrayLength = 10
#let's set the array to currently be an array of 0s
myArray = np.zeros(arrayLength) #make a numpy array of 10 zeros
# Let's define the first element of the array
myArray[0] = 1
i = 1 #with the first element defined, we can calculate the rest of the sequence beginning with the 2nd element
while i < arrayLength:
myArray[i] = myArray[i-1]+2
i = i + 1
print(myArray)
Explanation: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
End of explanation
import numpy as np
Explanation: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Distributions and Histograms
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following:
the uniform distribution: np.random.rand
the normal (Gaussian) distribution: np.random.randn
(notice the "n" that distinguishes the functions for generating normal vs. uniform distributions)
A. Generating distributions
Let's start with the uniform distribution (rand), which gives numbers uniformly distributed over the interval [0,1).
If you haven't already, import the numpy module.
End of explanation
np.random.rand(5)
Explanation: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
End of explanation
np.random.rand(5,5)
Explanation: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array:
End of explanation
np.random.randn(5)
Explanation: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve; unlike rand, these values are not restricted to the interval [0,1).
The equation for a Gaussian curve is the following:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
Don't worry about memorizing this equation, but do know that it exists and that numbers can be randomly drawn from it.
In python, the command np.random.randn selects numbers from the "standard" normal distribution.
All this means is that, in the equation above, $\mu$ (mean) = 0 and $\sigma$ (standard deviation) = 1. randn takes the size of the output as an argument just like rand does.
Try running the cell below to see the numbers you get from a normal distribution.
End of explanation
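To connect the formula above with what np.random.randn actually produces, here is a short illustrative overlay of the analytic curve on a normalized histogram (this cell is an addition, not part of the original lecture):
import numpy as np
import matplotlib.pyplot as plt

samples = np.random.randn(5000)
x = np.linspace(-4, 4, 200)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # Gaussian with mu = 0, sigma = 1
plt.hist(samples, bins=50, density=True)
plt.plot(x, pdf, 'k', linewidth=2)
plt.show()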
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation: http://matplotlib.org/1.2.1/api/pyplot_api.html?highlight=hist#matplotlib.pyplot.hist
Understanding distributions is perhaps best done by plotting them in a histogram. Lucky for us, matplotlib makes that very simple for us.
To make a histogram, we use the command plt.hist, which takes -- at minimum -- a vector of values that we want to plot as a histogram. We can also specify the number of bins.
First things first: let's import matplotlib:
End of explanation
#your code here
X = np.random.rand(5000)
Explanation: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
End of explanation
plt.hist(X, bins=20)
Explanation: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
End of explanation
#your code here
X = np.random.randn(5000)
plt.hist(X, bins=50)
Explanation: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT: You will use a similar format as above when you defined and plotted a uniform distribution.
End of explanation
mu = 10 #the mean of the distribution
sigma = 1 #the standard deviation
X = sigma * np.random.randn(5000) + mu
plt.hist(X,bins=50)
Explanation: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
End of explanation
#write your observations here
Explanation: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
End of explanation
N,bins,patches = plt.hist(X, bins=50)
len(N)
len(bins)
print(N)
print(bins)
Explanation: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
End of explanation
bin_avg = (bins[1:]+bins[:-1])/2
plt.plot(bin_avg, N, 'r*')
plt.show()
Explanation: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
End of explanation
mean = np.mean(X)
std = np.std(X)
print('mean: '+ repr(mean) )
print('standard deviation: ' + repr(std))
Explanation: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
End of explanation
lifetimes = np.loadtxt('Data/LifetimeData.txt')
Explanation: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http://www.nature.com/articles/ncomms11820 if you are so inclined. This data is from Fig. 6a).
Do you remember learning how to import data in yesterday's Lecture 2? The command you want to use is np.loadtxt. The data we'll be working with is called LifetimeData.txt, and it's located in the Data folder.
End of explanation
#your code here
N,bins,patches = plt.hist(lifetimes,bins=40)
Explanation: Next, plot a histogram of this data set (play around with the number of bins, too).
End of explanation
#your code here
mean = np.mean(lifetimes)
std = np.std(lifetimes)
print("mean: "+repr(mean))
print("standard deviation: "+repr(std))
Explanation: Now, calculate and print the mean and standard deviation of this distribution.
End of explanation |
15,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #1
This notebook contains the first homework for this class, and is due on Sunday, January 31st, 2016 at 11
Step1: Section 2 | Python Code:
# write any code you need here!
# Create additional cells if you need them by using the
# 'Insert' menu at the top of the browser window.
Explanation: Homework #1
This notebook contains the first homework for this class, and is due on Sunday, January 31st, 2016 at 11:59 p.m. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The matplotlib website
The matplotlib figure gallery (this is particularly helpful for getting ideas!)
The Pyplot tutorial
Your name
Put your name here!
Section 1: Carbon dioxide
Part 1. Consider this: How much carbon dioxide does a square kilometer of forest remove from the Earth's atmosphere each year? And, how does that compare to the amount of carbon dioxide that a car adds to the atmosphere each year?
Come up with a simple order-of-magnitude approximation for each of those two questions, and in the cell below this one write a paragraph or two addressing each of the two questions above. What are the factors you need to consider? What range of values might they have? In what way is your estimate limited? (Also, to add a twist: does it matter how old the trees in the forest are, or the car?)
Note: if you use a Google search or two to figure out what range of values you might want to use, include links to the relevant web page. You can either just paste the URL, or do something prettier, like this: google!. The syntax for that second one is [google!](http://google.com).
put your answer here!
Part 2. In the space below, write a Python program to model the answer to both of those questions, and keep track of the answers in a numpy array. Plot your answers to both questions in some convenient way (probably not a scatter plot - look at the matplotlib gallery for inspiration!). Do the answers you get make sense to you?
End of explanation
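Purely as an illustration of one way to structure Part 2 (every number below is a placeholder assumption, not a researched value -- the point is the numpy array and the plot, not the estimates):
import numpy as np
import matplotlib.pyplot as plt

# Placeholder low/mid/high guesses (tons of CO2 per year) -- replace with your own researched ranges
forest_absorbs = np.array([200., 700., 2000.])   # per square kilometer of forest
car_emits = np.array([3., 5., 8.])               # per car

labels = ['low', 'mid', 'high']
x = np.arange(len(labels))
plt.bar(x - 0.2, forest_absorbs, width=0.4, label='1 km$^2$ forest absorbs')
plt.bar(x + 0.2, car_emits, width=0.4, label='1 car emits')
plt.xticks(x, labels)
plt.ylabel('tons of CO$_2$ per year')
plt.legend()
plt.show()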
# Create any Python and Markdown cells you need
# to write your letter, do calculations, and make figures
# You can add more cells using the 'Insert' menu
# Note: you do not actually have to send this letter, but you can if you want!
Explanation: Section 2: Get the Lead Out, continued
As described in the in-class assignment on this subject, you're going to create a letter to send to the Governor's office based on the data analysis you did in class and what you do here. Use the rest of this notebook (starting with the "Your Document to the Governor's Office") to write that letter. Consider this core question:
Did water lead levels exceed the EPA's action limits? And if they did, how can we understand how badly it exceeded the limits?
Your document should be about 3-4 paragraphs long. You're encouraged to use code and results from your in-class work in your document. And, you should do the following:
State your position on whether lead levels exceeded EPA limits. Make it clear what your investigation found.
Justify your position with graphics and written analysis to explain why you think what you think.
Consider counterarguments. Could someone try to use the same data to arrive at a different conclusion than yours? If they could, explain why you think that position is flawed.
Remember: This is real data. So,
The conclusions you draw matter. These are Flint resident's actual living conditions.
You may find other results online, but you still have to do your own analysis to decide whether you agree with their results.
Any numerical conclusions you draw should be backed up by your code. If you say the average lead level was below EPA limits, you'll need to be able to back up that claim in your notebook either with graphical evidence or numerical evidence (calculations).
Your Letter to the Governor's Office
End of explanation |
15,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering Astronomical Sources
The objective of this hands-on activity is to cluster a set of candidate sources from the Zwicky Transient Facility's (ZTF) image subtraction pipeline. All candidate features and postage stamps were extracted from ZTF's public alert stream.
The goal of this exercise is to become familiar with the ZTF data, the examination of some of its features, and running sklearn's KMeans algorithm on 2 or more features. Here are the steps we will take
Step1: 0b. Data Location
You will need the following files
Step2: 1. Load Data
We are ready to get started!
Step3: 2. Plot Features
We will perform K-means clustering using two features
Step4: 3. KMeans Using Two Features
We rarely ever cluster only two features from a dataset. However, the advantage of doing so is that we can readily visualize two-dimensional data. Let's start off by clustering features elong and chipsf with KMeans. The plotKMeans function below implements a visualization of KMean's partitioning that was used in sklearn's KMean's demo.
Question
Step5: 4. Feature Scaling
We just discovered that distance metrics can be sensitive to the scale of your data (e.g., some features span large numeric ranges, but others don't). For machine learning methods that calculate similarity between feature vectors, it is important to normalize data within a standard range such as (0, 1) or with z-score normalization (scaling to unit mean and variance). Fortunately, sklearn also makes this quite easy. Please review sklearn's preprocessing module options, specifically StandardScaler which corresponds to z-score normalization and MinMaxScaler. Please implement one.
After your data has been scaled, scatter plot your rescaled features, and run KMeans with the transformed data. Compare the results on the transformed data with those above.
Step6: 5. Quantitative Cluster Evaluation
So far, we've been visually verifying our clusters. Let's use quantitative methods to verify our results.
The following is a score that does not require labels
Step7: 6. Cluster Evaluation by Visual Inspection
This time with postage stamps!
It can be tempting to let yourself be guided by metrics alone, and the metrics are useful guideposts that can help determine whether you're moving in the right direction. However, the goal of clustering is to reveal structure in your dataset. Fortunately, because the features were extracted from sources that were extracted from images, we can view the cutouts from each source to visually verify whether our clusters contain homogeneous objects.
The display methods below give you an opportunity to display random candidates from each cluster, or the candidates that are closest to the cluster center.
Step8: 7. Clustering in a Dimensionally-Reduced Space
Given the tools seen above, start clustering more than 2 features at a time. This work is free-form. I'll start you off with some suggested features. After plotting the feature distributions, you may choose to down-select further.
Because we're now working with more than 2 features, use PCA to project the feature space onto its first two principal components. You may use the methods above to run KMeans in that reduced feature space and evaluate your results. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import glob
import os
from time import time
from matplotlib.pyplot import imshow
from matplotlib.image import imread
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn import metrics
from sklearn.metrics.pairwise import euclidean_distances
Explanation: Clustering Astronomical Sources
The objective of this hands-on activity is to cluster a set of candidate sources from the Zwicky Transient Facility's (ZTF) image subtraction pipeline. All candidate features and postage stamps were extracted from ZTF's public alert stream.
The goal of this exercise is to become familiar with the ZTF data, the examination of some of its features, and running sklearn's KMeans algorithm on 2 or more features. Here are the steps we will take:
Load data
Plot Features 'elong' and 'chipsf'
Run KMeans on 2 Features
Feature Scaling
Evaluation Results Quantitatively
Evaluate Results by Examining Postage Stamps
Clustering in a Dimensionally-Reduced Space
0a. Imports
These are all the imports that will be used in this notebook. All should be available in the DSFP conda environment.
End of explanation
F_META = # complete
F_FEATS = # complete
D_STAMPS = # complete
Explanation: 0b. Data Location
You will need the following files:
- dsfp_ztf_meta.npy
- dsfp_ztf_feats.npy
- dsfp_ztf_png_stamps.tar.gz
You will need to unzip and unpack this last file (a "tarball") called dsfp_ztf_png_stamps.tar.gz. Run the following commands in the same directory as this notebook to unpack everything (note - some operating systems automatically unzip downloaded files):
gunzip dsfp_ztf_png_stamps.tar.gz
tar -xvf dsfp_ztf_png_stamps.tar
You should now have a directory in your current working directory (cwd) called dsfp_ztf_png_stamps.
Please specify the following file locations:
End of explanation
meta = np.load(F_META)
feats = np.load(F_FEATS)
COL_NAMES = ['diffmaglim', 'magpsf', 'sigmapsf', 'chipsf', 'magap', 'sigmagap',
'distnr', 'magnr', 'sigmagnr', 'chinr', 'sharpnr', 'sky',
'magdiff', 'fwhm', 'classtar', 'mindtoedge', 'magfromlim', 'seeratio',
'aimage', 'bimage', 'aimagerat', 'bimagerat', 'elong', 'nneg',
'nbad', 'ssdistnr', 'ssmagnr', 'sumrat', 'magapbig', 'sigmagapbig',
'ndethist', 'ncovhist', 'jdstarthist', 'jdendhist', 'scorr', 'label']
# INSTRUCTION: Verify that feats has the same number of columns as COL_NAMES
#
Explanation: 1. Load Data
We are ready to get started! :) Start by loading the data and confirming that feats has the same number of columns as COL_NAMES. Please note that the last column is a class label with values {0, 1}, where 0=bogus, and 1=real. Today we are doing unsupervised learning, but some clustering evaluation methods use labels to quantitatively measure the quality of the clustering result.
End of explanation
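One way to carry out the check asked for in the INSTRUCTION cell above (a sketch):
# feats should have one column per entry in COL_NAMES (the last one being the label)
assert feats.shape[1] == len(COL_NAMES), (feats.shape, len(COL_NAMES))
print(feats.shape, len(COL_NAMES))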
featnames_to_select = ['chipsf', 'elong']
# Extract the Correct Features
#
featidxs_to_select_indices = [ COL_NAMES.index(x) for x in featnames_to_select]
feats_selected = feats[:,featidxs_to_select_indices]
# Scatter Plot the Two Features
#
def plot_scatter(dat, xlabel, ylabel, xscale='linear', yscale='linear'):
plt.plot(dat[:,0], dat[:,1], 'k.')
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.xscale(xscale)
plt.yscale(yscale)
plt.show()
# Scatter Plot the Two Features
#
def plot_histogram(dat, bins, title, xscale='linear', yscale='linear'):
plt.hist(dat, bins)
plt.xscale(xscale)
plt.yscale(yscale)
plt.title(title)
plt.show()
# INSTRUCTION: Scatter Plot the Data
#
# INSTRUCTION: Plot the Histograms for both features. Hint, it may be helpful to plot some features on a log scale.
#
Explanation: 2. Plot Features
We will perform K-means clustering using two features: 'chipsf' and 'elong'. Chipsf is the uncertainty associated with performing PSF-fit photometry. The higher the chi values, the more uncertainty associated with the source's PSF fit. Elong is a measure of how elongated the source is. A transient point source should have a spherical point spread function. An elongated point source may be a sign of a problem with image subtraction.
Extract features chipsf and elong from the data. Scatter plot them together, and also plot their histograms.
Question: What do you notice about these features?
End of explanation
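A possible completion of the two INSTRUCTION cells above, reusing the plot_scatter and plot_histogram helpers defined in this notebook; putting chipsf on a log scale is a judgment call, since that feature spans a very wide range:
# chipsf is column 0 and elong is column 1 of feats_selected
plot_scatter(feats_selected, xlabel='chipsf', ylabel='elong', xscale='log')
plot_histogram(feats_selected[:, 0], bins=100, title='chipsf', xscale='log')
plot_histogram(feats_selected[:, 1], bins=100, title='elong')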
def runKMeans(dat, n_clusters=2, seed=0):
return KMeans(n_clusters, random_state=seed).fit(dat)
def plotKMeans(kmeans_res, reduced_dat, xlabel, ylabel, xscale='linear', yscale='linear'):
# Plot the decision boundary. For that, we will assign a color to each
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = reduced_dat[:, 0].min() - 1, reduced_dat[:, 0].max() + 1
y_min, y_max = reduced_dat[:, 1].min() - 1, reduced_dat[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans_res.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_dat[:,0], reduced_dat[:,1], 'k.')
plt.scatter(kmeans_res.cluster_centers_[:, 0], kmeans_res.cluster_centers_[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.xscale(xscale)
plt.yscale(yscale)
plt.show()
# INSTRUCTION: Use the runKMeans and plotKMeans functions to cluster the data (feats_selected)
# with several values of k.
Explanation: 3. KMeans Using Two Features
We rarely ever cluster only two features from a dataset. However, the advantage of doing so is that we can readily visualize two-dimensional data. Let's start off by clustering features elong and chipsf with KMeans. The plotKMeans function below implements a visualization of KMean's partitioning that was used in sklearn's KMean's demo.
Question: What do you think about the quality of the clusterings produced?
End of explanation
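For example, one way to follow the INSTRUCTION above (the choice of k values is arbitrary; clipping the extreme chipsf tail is my own pragmatic addition, only so plotKMeans's dense mesh over the data range stays manageable, and the partitions on these raw features will still look odd, which is the point of the next section):
# plotKMeans draws a mesh over the full data range, so extreme chipsf values can make it very slow;
# clip the top 1% of chipsf purely for this visualization
feats_clipped = feats_selected.copy()
feats_clipped[:, 0] = np.clip(feats_clipped[:, 0], 0, np.percentile(feats_clipped[:, 0], 99))

for k in (2, 3, 5):
    km = runKMeans(feats_clipped, n_clusters=k)
    plotKMeans(km, feats_clipped, xlabel='chipsf (clipped at 99th percentile)', ylabel='elong')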
# INSTRUCTION: Re-scale your data using either the MinMaxScaler or StandardScaler from sklearn
#
# INSTRUCTION: Scatter plot your rescaled data
#
# INSTRUCTION: Retry KMeans with the same values of k used above.
#
Explanation: 4. Feature Scaling
We just discovered that distance metrics can be sensitive to the scale of your data (e.g., some features span large numeric ranges, but others don't). For machine learning methods that calculate similarity between feature vectors, it is important to normalize data within a standard range such as (0, 1) or with z-score normalization (scaling to unit mean and variance). Fortunately, sklearn also makes this quite easy. Please review sklearn's preprocessing module options, specifically StandardScaler which corresponds to z-score normalization and MinMaxScaler. Please implement one.
After your data has been scaled, scatter plot your rescaled features, and run KMeans with the transformed data. Compare the results on the transformed data with those above.
End of explanation
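One possible completion of the INSTRUCTION cells above, using MinMaxScaler (StandardScaler would be used the same way); the name feats_selected_scaled is chosen to match the variable the stamp-display code further down expects:
scaler = MinMaxScaler()
feats_selected_scaled = scaler.fit_transform(feats_selected)

plot_scatter(feats_selected_scaled, xlabel='chipsf (scaled)', ylabel='elong (scaled)')

for k in (2, 3, 5):
    km_scaled = runKMeans(feats_selected_scaled, n_clusters=k)
    plotKMeans(km_scaled, feats_selected_scaled, xlabel='chipsf (scaled)', ylabel='elong (scaled)')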
sample_size = 300
def bench_k_means(estimator, name, data, labels):
t0 = time()
estimator.fit(data)
print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
labels = feats[:,-1]
print(82 * '_')
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
# INSTRUCTIONS: Use the bench_k_means method to compare your clustering results
#
Explanation: 5. Quantitative Cluster Evaluation
So far, we've been visually verifying our clusters. Let's use quantitative methods to verify our results.
The following scores do not require labels:
- inertia: "Sum of squared distances of samples to their closest cluster center."
- Silhouette coefficient: Measures minimal inertia in ratio to distance to next nearest cluster. The score is higher as clusters become more compact and well-separated.
The following scores do require labels, and are documented here.
ARI, AMI measure the similarity between ground_truth labels and predicted_labels. ARI measures similarity, and AMI measures in terms of mutual information. Random assignments score close to 0, correct assignments close to 1.
homogeneity: purity of the cluster (did all cluster members have the same label?). Scores in [0,1] where 0 is bad.
completeness: did all labels cluster together in a single cluster? Scores in [0,1] where 0 is bad.
End of explanation
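For instance, comparing a few values of k on the scaled two-feature data (this assumes feats_selected_scaled from the feature-scaling step; labels was defined above as the last column of feats):
for k in (2, 3, 5):
    estimator = KMeans(n_clusters=k, random_state=0)
    bench_k_means(estimator, name='k=%d' % k, data=feats_selected_scaled, labels=labels)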
def display_stamps(candids, fig_title):
# display five across
num_per_row = 5
for i, candid in enumerate(candids):
f_stamp = glob.glob(os.path.join(D_STAMPS, 'candid{}*.png'.format(candid)))[0] # there should only be one file returned!
if (i % num_per_row) == 0:
fig = plt.figure(figsize=(18, 3))
fig.suptitle(fig_title)
ax = fig.add_subplot(1, num_per_row, i%num_per_row + 1)
ax.set_axis_off()
ax.set_title(candid)
stamp = imread(f_stamp)
imshow(stamp)
return
def closest_to_centroid(centroid, cluster_feats, cluster_candids):
dists = euclidean_distances(cluster_feats, centroid.reshape(1, -1))[:,0]
closest_indices = np.argsort(dists)[:10]
return cluster_candids[closest_indices]
def show_cluster_stamps(kmeans_res, displayMode='random', num_to_display=10):
# spits out a random selection of stamps from each cluster
for i in range(kmeans_res.n_clusters):
centroid = kmeans_res.cluster_centers_[i, :]
mask = kmeans_res.labels_ == i
cluster_candids = meta[mask]['candid']
cluster_feats = feats_selected_scaled[mask]
if displayMode == 'near_centroid':
selected_candids = closest_to_centroid(centroid, cluster_feats, cluster_candids)
if displayMode == 'random':
np.random.shuffle(cluster_candids)
selected_candids = cluster_candids[:num_to_display]
display_stamps(selected_candids, 'Cluster {}'.format(i))
# INSTRUCTION: Use the show_cluster_stamps method to display cutouts associated with each cluster.
# Do you see similar objects in each cluster?
#
Explanation: 6. Cluster Evaluation by Visual Inspection
This time with postage stamps!
It can be tempting to let yourself be guided by metrics alone, and the metrics are useful guideposts that can help determine whether you're moving in the right direction. However, the goal of clustering is to reveal structure in your dataset. Fortunately, because the features were extracted from sources that were extracted from images, we can view the cutouts from each source to visually verify whether our clusters contain homogeneous objects.
The display methods below give you an opportunity to display random candidates from each cluster, or the candidates that are closest to the cluster center.
End of explanation
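A sketch of how the display helpers above can be used (note that show_cluster_stamps reads the globals meta and feats_selected_scaled, so the scaled features must correspond row-for-row to meta, and only the 'near_centroid' and 'random' display modes are handled):
km_scaled = runKMeans(feats_selected_scaled, n_clusters=3)
show_cluster_stamps(km_scaled, displayMode='near_centroid')
show_cluster_stamps(km_scaled, displayMode='random', num_to_display=5)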
featnames_to_select = ['chipsf', 'elong', 'diffmaglim', 'magpsf', 'sigmapsf',
'chipsf', 'magap', 'sigmagap', 'sky', 'magdiff', 'fwhm',
'mindtoedge', 'magfromlim', 'seeratio', 'aimage', 'bimage',
'aimagerat', 'bimagerat', 'elong', 'nneg', 'nbad', 'sumrat', 'magapbig', 'sigmagapbig']
# INSTRUCTION: Visualize these features. Discard any you consider to be problematic.
# INSTRUCTION: Filter the feature space
# INSTRUCTION: Run PCA on this feature space to reduce it to 2 principal components
# INSTRUCTION: Run KMeans on this 2-dimensional PCA space, and evaluate your results both quantitatively and qualitatively.
Explanation: 7. Clustering in a Dimensionally-Reduced Space
Given the tools seen above, start clustering more than 2 features at a time. This work is free-form. I'll start you off with some suggested features. After plotting the feature distributions, you may choose to down-select further.
Because we're now working with more than 2 features, use PCA to project the feature space onto its first two principal components. You may use the methods above to run KMeans in that reduced feature space and evaluate your results.
End of explanation |
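A sketch of one way to follow these instructions; sklearn's PCA is not imported at the top of this notebook, so the import here is an addition, and standardizing before PCA is a common (but not the only) choice:
from sklearn.decomposition import PCA

# Columns for the suggested features (duplicates in the list above are harmless here)
featidxs = [COL_NAMES.index(name) for name in featnames_to_select]
feats_many = StandardScaler().fit_transform(feats[:, featidxs])

pca = PCA(n_components=2)
feats_pca = pca.fit_transform(feats_many)
print('explained variance ratio:', pca.explained_variance_ratio_)

plot_scatter(feats_pca, xlabel='PC 1', ylabel='PC 2')
for k in (2, 3, 5):
    bench_k_means(KMeans(n_clusters=k, random_state=0), name='pca k=%d' % k, data=feats_pca, labels=labels)
plotKMeans and show_cluster_stamps from the earlier sections can be reused on feats_pca in the same way if you also want the partition plot and the cutouts.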
15,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Name Competition with Gradient Boosting
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* The use of a gradient boosting classifier from the scikit-learn library to perform classification as in the name competition (assign "+" if second letter is vowel)
Step1: Load file from data and convert to training set and test set (reading from two distinct files)
Step2: A simple class that converts the string into numbers and then trains a simple classifier using the gradient boosting technique. The resulting gradient boosting classifier is essentially a rule-based system, where the results are derived from the inputs to the classifier.
The main take-away message is that rule-based systems perform extremely well, if the underlying data follows a process that results from simple rules (as are often encountered in practice). For example, medical diagnosis systems follow very clear rules and hence often benefit by training from gradient boosting classifiers. | Python Code:
import numpy as np
import string
from sklearn.ensemble import GradientBoostingClassifier
Explanation: Name Competition with Gradient Boosting
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* The use of a gradient boosting classifier from the scikit-learn library to perform classification as in the name competition (assign "+" if second letter is vowel)
End of explanation
def read_file(filename):
with open(filename) as f:
content = f.readlines()
y = [line[0] for line in content]
X = [line[2:].strip() for line in content]
return X,y
X_train,y_train = read_file('Names_data_train.txt')
X_test,y_test = read_file('Names_data_test.txt')
Explanation: Load file from data and convert to training set and test set (reading from two distinct files)
End of explanation
class Gradient_Boosting_Estimator():
'''
Class for training a gradient boosting, rule-based estimator on the letters
Parameter is the number of letters of the word to consider
'''
def __init__( self, letters ):
self.letters = letters
self.gbes = GradientBoostingClassifier()
# convert the name strings to numeric feature vectors and fit the gradient boosting classifier
def fit( self, X, y) :
# convert to numeric entries
ty = np.zeros( len(y) )
for k in range( len(y) ):
if y[k]=='+':
ty[k] = 1
tX = np.empty( (0, self.letters) )
for mys in X:
if len(mys) < self.letters:
# add spaces if string is too short
mys += ( ' ' * (self.letters-len(mys) ) )
tX = np.vstack( (tX, [ord(x) for x in mys[0:self.letters] ] ) )
# fit the classifier (taken from the SciKit-Learn library)
self.gbes.fit(tX, ty)
# predict with the fitted gradient boosting classifier and map its 0/1 outputs back to '-'/'+' labels
def predict(self, X):
rety = ['+' for _ in X]
for idx, elem_X in enumerate(X):
# add spaces if string is too short
elem_X += ( ' ' * max(0,self.letters-len(elem_X) ) )
elem_numeric = np.array([ord(x) for x in elem_X[0:self.letters]])
rv = self.gbes.predict(elem_numeric.reshape(1,-1))
if rv == 0:
rety[idx] = '-'
return rety
clf = Gradient_Boosting_Estimator(10)
clf.fit(X_train,y_train)
y = clf.predict(X_test)
errors = 0
for idx,value in enumerate(y_test):
print(value,'predicted as:', y[idx], ' (',X_test[idx],')')
if value != y[idx]:
errors += 1
print('Prediction errors: %d (error rate %1.2f %%)' % (errors, errors/len(y)*100))
# find optimal number of errors
for letter in range(1,10):
clf = Gradient_Boosting_Estimator(letter)
clf.fit(X_train,y_train)
y = clf.predict(X_test)
errors = 0
for idx,k in enumerate(y_test):
if k != y[idx]:
errors += 1
print('%d letters: %d prediction errors (error rate %1.2f %%)' % (letter, errors,errors*100/len(y_test)))
# Train with 5 letters
clf = Gradient_Boosting_Estimator(5)
clf.fit(X_train,y_train)
print(clf.predict(['Xavier Jones']))
Explanation: A simple class that converts the string into numbers and then trains a simple classifier using the gradient boosting technique. The resulting gradient boosting classifier is essentially a rule-based system, where the results are derived from the inputs to the classifier.
The main take-away message is that rule-based systems perform extremely well if the underlying data follows a process that results from simple rules (as are often encountered in practice). For example, medical diagnosis systems follow very clear rules and hence often benefit from training with gradient boosting classifiers.
End of explanation |
15,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step6: We need to define some parameters
Step7: We should check if everything is OK with an alternation histogram
Step8: If the plot looks good we can apply the parameters with
Step9: Measurements infos
All the measurement data is in the d variable. We can print it
Step10: Or check the measurements duration
Step11: Compute background
Compute the background using automatic threshold
Step12: Burst search and selection
Step14: Donor Leakage fit
Half-Sample Mode
Fit peak using the mode computed with the half-sample algorithm (Bickel 2005).
Step15: Gaussian Fit
Fit the histogram with a gaussian
Step16: KDE maximum
Step17: Leakage summary
Step18: Burst size distribution
Step19: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step20: Weighted mean of $E$ of each burst
Step21: Gaussian fit (no weights)
Step22: Gaussian fit (using burst size as weights)
Step23: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step24: The Maximum likelihood fit for a Gaussian population is the mean
Step25: Computing the weighted mean and weighted standard deviation we get
Step26: Save data to file
Step27: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step28: This is just a trick to format the different variables | Python Code:
ph_sel_name = "Dex"
data_id = "12d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:35:46 2017
Duration: 8 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
def hsm_mode(s):
"""Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
"""
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
Explanation: Donor Leakage fit
Half-Sample Mode
Fit peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
Explanation: KDE maximum
End of explanation
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
Explanation: Leakage summary
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst size distribution
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
15,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Say Hello
Recipe template for say hello.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Say Hello Recipe Parameters
This should be called for testing only.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Say Hello
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Say Hello
Recipe template for say hello.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'say_first':'Hello Once', # Type in a greeting.
'say_second':'Hello Twice', # Type in a greeting.
'error':'', # Optional error for testing.
'sleep':0, # Seconds to sleep.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Say Hello Recipe Parameters
This should be called for testing only.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'hello':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'say':{'field':{'name':'say_first','kind':'string','order':1,'default':'Hello Once','description':'Type in a greeting.'}},
'error':{'field':{'name':'error','kind':'string','order':3,'default':'','description':'Optional error for testing.'}},
'sleep':{'field':{'name':'sleep','kind':'integer','order':4,'default':0,'description':'Seconds to sleep.'}}
}
},
{
'hello':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'say':{'field':{'name':'say_second','kind':'string','order':1,'default':'Hello Twice','description':'Type in a greeting.'}},
'sleep':{'field':{'name':'sleep','kind':'integer','order':4,'default':0,'description':'Seconds to sleep.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Say Hello
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
15,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deriving a Point-Spread Function in a Crowded Field
following Appendix III of Peter Stetson's User's Manual for DAOPHOT II
Using pydaophot form astwro python package
All italic text here have been taken from Stetson's manual.
The only input file for this procedure is a FITS file containing reference frame image. Here we use sample FITS form astwro package (NGC6871 I filter 20s frame). Below we get filepath for this image, as well as create instances of Daophot and Allstar classes - wrappers around daophot and allstar respectively.
One should also provide daophot.opt, photo.opt and allstar.opt in the appropriate constructors. Here the default, built-in sample opt files are used.
Step1: Daophot object creates temporary working directory (runner directory), which is passed to Allstar constructor to share.
Step2: Daophot got FITS file in construction, which will be automatically ATTACHed.
(1) Run FIND on your frame
Daophot FIND parameters Number of frames averaged, summed are defaulted to 1,1, below are provided for clarity.
Step3: Check some results returned by FIND, every method for daophot command returns results object.
Step4: Also, take a look into runner directory
Step5: We see symlinks to input image and opt files, and i.coo - result of FIND
(2) Run PHOTOMETRY on your frame
Below we run photometry, providing explicitly radius of aperture A1 and IS, OS sky radiuses.
Step6: List of stars generated by daophot commands, can be easily get as astwro.starlist.Starlist being essentially pandas.DataFrame
Step7: Let's check 10 stars with least A1 error (mag_err column). (pandas style)
Step8: (3) SORT the output from PHOTOMETRY
in order of increasing apparent magnitude decreasing
stellar brightness with the renumbering feature. This step is optional but it can be more convenient than not.
SORT command of daophor is not implemented (yet) in pydaohot. But we do sorting by ourself.
Step9: Here we write sorted list back info photometry file at default name (overwriting existing one), because it's convenient to use default files in next commands.
Step10: (4) PICK to generate a set of likely PSF stars
How many stars you want to use is a function of the degree of variation you expect and the frequency with which stars are contaminated by cosmic rays or neighbor stars. [...]
Step11: If no error reported, symlink to image file (renamed to i.fits), and all daophot output files (i.*) are in the working directory of runner
Step12: One may examine and improve i.lst list of PSF stars. Or use astwro.tools.gapick.py to obtain list of PSF stars optimised by genetic algorithm.
*(5) Run PSF *
tell it the name of your complete (sorted renumbered) aperture photometry file, the name of the file with the list of PSF stars, and the name of the disk file you want the point spread function stored in (the default should be fine) [...]
If the frame is crowded it is probably worth your while to generate the first PSF with the "VARIABLE PSF" option set to -1 --- pure analytic PSF. That way, the companions will not generate ghosts in the model PSF that will come back to haunt you later. You should also have specified a reasonably generous fitting radius --- these stars have been preselected to be as isolated as possible and you want the best fits you can get. But remember to avoid letting neighbor stars intrude within one fitting radius of the center of any PSF star.
For illustration we will set VARIABLE PSF option, before PSf()
Step13: (6) Run GROUP and NSTAR or ALLSTAR on your NEI file
If your PSF stars have many neighbors this may take some minutes of real time. Please be patient or submit it as a batch job and perform steps on your next frame while you wait.
We use allstar. (GROUP and NSTAR command are not implemented in current version of pydaophot). We use prepared above Allstar object
Step14: All result objects, has get_buffer() method, useful to lookup unparsed daophot or allstar output
Step15: *(8) EXIT from DAOPHOT and send this new picture to the image display *
Examine each of the PSF stars and its environs. Have all of the PSF stars subtracted out more or less cleanly, or should some of them be rejected from further use as PSF stars? (If so use a text editor to delete these stars from the LST file.) Have the neighbors mostly disappeared, or have they left behind big zits? Have you uncovered any faint companions that FIND missed?[...]
The absolute path to subtracted file (like for most output files) is available as result's property
Step16: We can also generate region file for psf stars
Step17: (9) Back in DAOPHOT II ATTACH the original picture and run SUBSTAR
specifying the file created in step (6) or in step (8f) as the stars to subtract, and the stars in the LST file as the stars to keep.
Lookup into runner dir
Step18: You have now created a new picture which has the PSF stars still in it but from which the known neighbors of these PSF stars have been mostly removed
(10) ATTACH the new star subtracted frame and repeat step (5) to derive a new point spread function
(11+...) Run GROUP NSTAR or ALLSTAR
Step19: Check last image with subtracted PSF stars neighbours.
Step20: Once you have produced a frame in which the PSF stars and their neighbors all subtract out cleanly, one more time through PSF should produce a point-spread function you can be proud of. | Python Code:
from astwro.sampledata import fits_image
frame = fits_image()
Explanation: Deriving a Point-Spread Function in a Crowded Field
following Appendix III of Peter Stetson's User's Manual for DAOPHOT II
Using pydaophot from the astwro python package
All italic text here have been taken from Stetson's manual.
The only input file for this procedure is a FITS file containing reference frame image. Here we use sample FITS form astwro package (NGC6871 I filter 20s frame). Below we get filepath for this image, as well as create instances of Daophot and Allstar classes - wrappers around daophot and allstar respectively.
One should also provide daophot.opt, photo.opt and allstar.opt in the appropriate constructors. Here the default, built-in sample opt files are used.
End of explanation
from astwro.pydaophot import Daophot, Allstar
dp = Daophot(image=frame)
al = Allstar(dir=dp.dir)
Explanation: Daophot object creates temporary working directory (runner directory), which is passed to Allstar constructor to share.
End of explanation
res = dp.FInd(frames_av=1, frames_sum=1)
Explanation: Daophot got FITS file in construction, which will be automatically ATTACHed.
(1) Run FIND on your frame
Daophot FIND parameters Number of frames averaged, summed are defaulted to 1,1, below are provided for clarity.
End of explanation
print ("{} pixels analysed, sky estimate {}, {} stars found.".format(res.pixels, res.sky, res.stars))
Explanation: Check some results returned by FIND, every method for daophot command returns results object.
End of explanation
!ls -lt $dp.dir
Explanation: Also, take a look into runner directory
End of explanation
res = dp.PHotometry(apertures=[8], IS=35, OS=50)
Explanation: We see symlinks to input image and opt files, and i.coo - result of FIND
(2) Run PHOTOMETRY on your frame
Below we run photometry, providing explicitly radius of aperture A1 and IS, OS sky radiuses.
End of explanation
stars = res.photometry_starlist
Explanation: Lists of stars generated by daophot commands can easily be obtained as astwro.starlist.Starlist, which is essentially a pandas.DataFrame:
End of explanation
stars.sort_values('mag_err').iloc[:10]
Explanation: Let's check 10 stars with least A1 error (mag_err column). (pandas style)
End of explanation
sorted_stars = stars.sort_values('mag')
sorted_stars.renumber()
Explanation: (3) SORT the output from PHOTOMETRY
in order of increasing apparent magnitude (decreasing
stellar brightness) with the renumbering feature. This step is optional but it can be more convenient than not.
The SORT command of daophot is not implemented (yet) in pydaophot, but we do the sorting ourselves.
End of explanation
dp.write_starlist(sorted_stars, 'i.ap')
!head -n20 $dp.PHotometry_result.photometry_file
dp.PHotometry_result.photometry_file
Explanation: Here we write the sorted list back into the photometry file at the default name (overwriting the existing one), because it's convenient to use default files in the next commands.
End of explanation
pick_res = dp.PIck(faintest_mag=20, number_of_stars_to_pick=40)
Explanation: (4) PICK to generate a set of likely PSF stars
How many stars you want to use is a function of the degree of variation you expect and the frequency with which stars are contaminated by cosmic rays or neighbor stars. [...]
End of explanation
ls $dp.dir
Explanation: If no error reported, symlink to image file (renamed to i.fits), and all daophot output files (i.*) are in the working directory of runner:
End of explanation
dp.set_options('VARIABLE PSF', 2)
psf_res = dp.PSf()
Explanation: One may examine and improve i.lst list of PSF stars. Or use astwro.tools.gapick.py to obtain list of PSF stars optimised by genetic algorithm.
*(5) Run PSF *
tell it the name of your complete (sorted renumbered) aperture photometry file, the name of the file with the list of PSF stars, and the name of the disk file you want the point spread function stored in (the default should be fine) [...]
If the frame is crowded it is probably worth your while to generate the first PSF with the "VARIABLE PSF" option set to -1 --- pure analytic PSF. That way, the companions will not generate ghosts in the model PSF that will come back to haunt you later. You should also have specified a reasonably generous fitting radius --- these stars have been preselected to be as isolated as possible and you want the best fits you can get. But remember to avoid letting neighbor stars intrude within one fitting radius of the center of any PSF star.
For illustration we will set VARIABLE PSF option, before PSf()
End of explanation
alls_res = al.ALlstar(image_file=frame, stars=psf_res.nei_file, subtracted_image_file='is.fits')
Explanation: (6) Run GROUP and NSTAR or ALLSTAR on your NEI file
If your PSF stars have many neighbors this may take some minutes of real time. Please be patient or submit it as a batch job and perform steps on your next frame while you wait.
We use allstar (the GROUP and NSTAR commands are not implemented in the current version of pydaophot). We use the Allstar object prepared above: al, operating on the same runner dir as dp.
As parameters we set the input image (we haven't done that in the constructor) and the nei file produced by PSf(). We don't remember the name i.psf, so we use the psf_res.nei_file property.
Finally, we order allstar to produce a subtracted FITS.
End of explanation
print (alls_res.get_buffer())
Explanation: All result objects have a get_buffer() method, useful for looking up the unparsed daophot or allstar output:
End of explanation
sub_img = alls_res.subtracted_image_file
Explanation: *(8) EXIT from DAOPHOT and send this new picture to the image display *
Examine each of the PSF stars and its environs. Have all of the PSF stars subtracted out more or less cleanly, or should some of them be rejected from further use as PSF stars? (If so use a text editor to delete these stars from the LST file.) Have the neighbors mostly disappeared, or have they left behind big zits? Have you uncovered any faint companions that FIND missed?[...]
The absolute path to subtracted file (like for most output files) is available as result's property:
End of explanation
from astwro.starlist.ds9 import write_ds9_regions
reg_file_path = dp.file_from_runner_dir('lst.reg')
write_ds9_regions(pick_res.picked_starlist, reg_file_path)
# One can run ds9 directly from notebook:
!ds9 $sub_img -regions $reg_file_path
Explanation: We can also generate region file for psf stars:
End of explanation
ls $al.dir
sub_res = dp.SUbstar(subtract=alls_res.profile_photometry_file, leave_in=pick_res.picked_stars_file)
Explanation: (9) Back in DAOPHOT II ATTACH the original picture and run SUBSTAR
specifying the file created in step (6) or in step (8f) as the stars to subtract, and the stars in the LST file as the stars to keep.
Lookup into runner dir:
End of explanation
for i in range(3):
print ("Iteration {}: Allstar chi: {}".format(i, alls_res.als_stars.chi.mean()))
dp.image = 'is.fits'
respsf = dp.PSf()
print ("Iteration {}: PSF chi: {}".format(i, respsf.chi))
alls_res = al.ALlstar(image_file=frame, stars='i.nei')
dp.image = frame
dp.SUbstar(subtract='i.als', leave_in='i.lst')
print ("Final: Allstar chi: {}".format(alls_res.als_stars.chi.mean()))
alls_res.als_stars
Explanation: You have now created a new picture which has the PSF stars still in it but from which the known neighbors of these PSF stars have been mostly removed
(10) ATTACH the new star subtracted frame and repeat step (5) to derive a new point spread function
(11+...) Run GROUP NSTAR or ALLSTAR
End of explanation
!ds9 $dp.SUbstar_result.subtracted_image_file -regions $reg_file_path
Explanation: Check last image with subtracted PSF stars neighbours.
End of explanation
dp.image = 'is.fits'
psf_res = dp.PSf()
print ("PSF file: {}".format(psf_res.psf_file))
Explanation: Once you have produced a frame in which the PSF stars and their neighbors all subtract out cleanly, one more time through PSF should produce a point-spread function you can be proud of.
End of explanation |
15,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization of pre-generated images.
This is a notebook to load and display pre-generated images used in parameter exploration.
Step1: Now we need to build a function that takes distance, base and value as parameters and returns the SLM, STDM, and Visualize Cluster Matrix images.
Step2: Now we build the widget for each exploration | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.image as mpimg
# Widgets library
from ipywidgets import interact
%matplotlib inline
# We need to load all the files here
# Load the file
folder = '../results/'
name = 'parameter_swep_SLM-0.00-0.00-10.00.png'
file_name = folder + name
image = mpimg.imread(file_name)
# Now let's plot it
figsize = (16, 12)
figure = plt.figure(figsize=figsize)
ax = figure.add_subplot(1, 1, 1)
ax.set_axis_off()
ax.imshow(image)
Explanation: Visualization of pre-generated images.
This is a notebook to load and display pre-generated images used in parameter exploration.
End of explanation
def load_SLM(base, distance, value):
# Load the image
folder = '../results/'
name = 'parameter_swep_SLM'
parameter_marker = '-{0:4.2f}-{1:4.2f}-{2:4.2f}'.format(base, distance, value)
file_name = folder + name + parameter_marker + '.png'
image = mpimg.imread(file_name)
# Plot
figsize = (16, 12)
figure = plt.figure(figsize=figsize)
ax = figure.add_subplot(1, 1, 1)
ax.set_axis_off()
ax.imshow(image)
def load_STDM(base, distance, value):
folder = '../results/'
name = 'parameter_swep_STDM'
parameter_marker = '-{0:5.2f}-{1:5.2f}-{2:5.2f}'.format(base, distance, value)
file_name = folder + name + parameter_marker + '.png'
image = mpimg.imread(file_name)
# Plot
figsize = (16, 12)
figure = plt.figure(figsize=figsize)
ax = figure.add_subplot(1, 1, 1)
ax.set_axis_off()
ax.imshow(image)
def load_cluster(distance, base, value):
folder = '../results/'
name = 'parameter_swep_cluster'
parameter_marker = '-{0:5.2f}-{1:5.2f}-{2:5.2f}'.format(base, distance, value)
file_name = folder + name + parameter_marker + '.png'
image = mpimg.imread(file_name)
# Plot
figsize = (16, 12)
figure = plt.figure(figsize=figsize)
ax = figure.add_subplot(1, 1, 1)
ax.set_axis_off()
ax.imshow(image)
def load_cluster_SLM(base, distance, value):
folder = '../results/'
name = 'parameter_swep_cluster_SLM'
parameter_marker = '-{0:5.2f}-{1:5.2f}-{2:5.2f}'.format(base, distance, value)
file_name = folder + name + parameter_marker + '.png'
image = mpimg.imread(file_name)
# Plot
figsize = (16, 12)
figure = plt.figure(figsize=figsize)
ax = figure.add_subplot(1, 1, 1)
ax.set_axis_off()
ax.imshow(image)
Explanation: Now we need to build a function that takes distance, base and value as parameters and returns the SLM, STDM, and cluster-matrix visualizations.
End of explanation
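The four loaders above differ only in the file-name prefix (and argument order); as an optional aside, a single parameterised helper could cover them all. This is a sketch only: the prefix strings and the '{:5.2f}' formatting are copied from the functions above and must match the files actually on disk.
def load_result(prefix, base, distance, value, folder='../results/'):
    # e.g. prefix='parameter_swep_STDM' reproduces load_STDM
    file_name = '{0}{1}-{2:5.2f}-{3:5.2f}-{4:5.2f}.png'.format(folder, prefix, base, distance, value)
    image = mpimg.imread(file_name)
    # Plot
    figure = plt.figure(figsize=(16, 12))
    ax = figure.add_subplot(1, 1, 1)
    ax.set_axis_off()
    ax.imshow(image)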
interact(load_SLM, base=(0, 200, 40), distance=(0, 601, 40), value=(10, 200, 38))
interact(load_STDM, base=(0, 200, 40), distance=(0, 601, 40), value=(10, 200, 38))
interact(load_cluster, base=(0, 200, 40), distance=(0, 601, 40), value=(10, 200, 38))
interact(load_cluster_SLM, base=(0, 200, 40), distance=(0, 601, 40), value=(10, 200, 38))
Explanation: Now we build the widget for each exploration
End of explanation |
15,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()  # bag of words here
for idx, row in reviews.iterrows():
for word in row[0].split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
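As a quick aside (not part of the original notebook), you can also judge the cutoff by looking directly at the tail of the frequency distribution at a few candidate vocabulary sizes:
counts_desc = sorted(total_counts.values(), reverse=True)
for cutoff in (1000, 5000, 10000):
    # count of the least frequent word that would still be kept at this cutoff
    print("count at rank {}: {}".format(cutoff, counts_desc[cutoff - 1]))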
word2idx = {word: i for i, word in enumerate(vocab)} ## create the word-to-index dictionary here
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
vector = np.zeros(len(vocab), dtype=np.int)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
vector[idx] += 1
return np.array(vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
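A small sanity check (an aside, not in the original notebook): there should be one row per review and one column per vocabulary word.
print(word_vectors.shape)  # expected: (25000, 10000)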
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
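If you preferred to hold out a validation set explicitly rather than letting TFLearn do it, a minimal sketch could look like this; the notebook itself relies on validation_set=0.1 in model.fit, so this is illustration only.
# Optional explicit validation split (illustration only)
val_fraction = 0.1
n_val = int(len(trainX) * val_fraction)
valX, valY = trainX[:n_val], trainY[:n_val]
trainX_small, trainY_small = trainX[n_val:], trainY[n_val:]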
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)])
net = tflearn.fully_connected(net, 300, activation='ReLU')
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
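As an optional follow-up (not part of the original notebook), the trained weights can be checkpointed so you don't have to retrain; this assumes the standard tflearn.DNN save/load API and a hypothetical file name.
model.save('sentiment_model.tfl')   # hypothetical checkpoint file name
# ...later, after rebuilding the same architecture with build_model():
# model.load('sentiment_model.tfl')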
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
15,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
Step1: Test OpenCV
Step2: Test TensorFlow
Step3: Test Moviepy
Step5: Create a new video with moviepy by processing each frame to YUV color space. | Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
img = mpimg.imread('test.jpg')
plt.imshow(img)
Explanation: Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
End of explanation
import cv2
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='Greys_r')
Explanation: Test OpenCV
End of explanation
import tensorflow as tf
with tf.Session() as sess:
a = tf.constant(1)
b = tf.constant(2)
c = a + b
# Should be 3
print("1 + 2 = {}".format(sess.run(c)))
Explanation: Test TensorFlow
End of explanation
# Import everything needed to edit/save/watch video clips
import imageio
imageio.plugins.ffmpeg.download()
from moviepy.editor import VideoFileClip
from IPython.display import HTML
Explanation: Test Moviepy
End of explanation
new_clip_output = 'test_output.mp4'
test_clip = VideoFileClip("test.mp4")
new_clip = test_clip.fl_image(lambda x: cv2.cvtColor(x, cv2.COLOR_RGB2YUV)) #NOTE: this function expects color images!!
%time new_clip.write_videofile(new_clip_output, audio=False)
HTML("""
<video width="640" height="300" controls>
<source src="{0}" type="video/mp4">
</video>
""".format(new_clip_output))
Explanation: Create a new video with moviepy by processing each frame to YUV color space.
End of explanation |
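If you wanted a grayscale test clip instead, note (per the comment above) that fl_image expects 3-channel frames; one way around that, sketched here as an aside with a hypothetical output file name, is to convert to gray and straight back to a 3-channel image.
gray_clip_output = 'test_output_gray.mp4'  # hypothetical file name
gray_clip = test_clip.fl_image(
    lambda x: cv2.cvtColor(cv2.cvtColor(x, cv2.COLOR_RGB2GRAY), cv2.COLOR_GRAY2RGB))
gray_clip.write_videofile(gray_clip_output, audio=False)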
15,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom finite difference coefficients in Devito
Introduction
When taking the numerical derivative of a function in Devito, the default behaviour is for 'standard' finite difference weights (obtained via a Taylor series expansion about the point of differentiation) to be applied. Consider the following example for some field $u(\mathbf{x},t)$, where $\mathbf{x}=(x,y)$. Let us define a computational domain/grid and differentiate our field with respect to $x$.
Step1: Now, let's look at the output of $\partial u/\partial x$
Step2: By default the 'standard' Taylor series expansion result, where h_x represents the $x$-direction grid spacing, is returned. However, there may be instances when a user wishes to use 'non-standard' weights when, for example, implementing a dispersion-relation-preserving (DRP) scheme. See e.g.
[1] Christopher K.W. Tam, Jay C. Webb (1993). ”Dispersion-Relation-Preserving Finite Difference Schemes for Computational Acoustics.” J. Comput. Phys., 107(2), 262--281. https
Step3: Note the addition of the coefficients='symbolic' keyword. Now, when printing $\partial u/\partial x$ we obtain
Step4: Owing to the addition of the coefficients='symbolic' keyword, the weights have been replaced by sympy functions. Now take, for example, the weight W(x - h_x, 1, u(t, x, y), x); the notation is as follows
Step5: Devito Coefficient objects take arguments in the following order
Step6: We see that in the above equation the standard weights for the first derivative of u in the $x$-direction have now been replaced with our user defined weights. Note that since no replacement rules were defined for the time derivative (u.dt) standard weights have replaced the symbolic weights.
Now, let us consider a more complete example.
Example
Step7: The seismic wave source term will be modelled as a Ricker Wavelet with a peak-frequency of $25$Hz located at $(1000m,800m)$. Before applying the DRP scheme, we begin by generating a 'reference' solution using a spatially high-order standard finite difference scheme and time step well below the model's critical time-step. The scheme will be 2nd order in time.
Step8: Now let us define our wavefield and PDE
Step9: Now, let's create the operator and execute the time marching scheme
Step10: And plot the result
Step11: We will now reimplement the above model applying the DRP scheme presented in [2].
First, since we wish to apply different custom FD coefficients in the upper and lower layers, we need to define these two 'subdomains' using the Devito SubDomain functionality
Step12: We now create our model incorporating these subdomains
Step13: And re-define model related objects. Note that now our wave-field will be defined with coefficients='symbolic'.
Step14: We now create a stencil for each of our 'Upper' and 'Lower' subdomains defining different custom FD weights within each of these subdomains.
Step15: And now execute the operator
Step16: And plot the new results
Step17: Finally, for comparison, let's plot the difference between the standard 20th order and optimized 10th order models | Python Code:
import numpy as np
import sympy as sp
from devito import Grid, TimeFunction
# Create our grid (computational domain)
Lx = 10
Ly = Lx
Nx = 11
Ny = Nx
dx = Lx/(Nx-1)
dy = dx
grid = Grid(shape=(Nx,Ny), extent=(Lx,Ly))
# Define u(x,y,t) on this grid
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2)
# Define symbol for laplacian replacement
H = sp.symbols('H')
Explanation: Custom finite difference coefficients in Devito
Introduction
When taking the numerical derivative of a function in Devito, the default behaviour is for 'standard' finite difference weights (obtained via a Taylor series expansion about the point of differentiation) to be applied. Consider the following example for some field $u(\mathbf{x},t)$, where $\mathbf{x}=(x,y)$. Let us define a computational domain/grid and differentiate our field with respect to $x$.
End of explanation
print(u.dx.evaluate)
Explanation: Now, let's look at the output of $\partial u/\partial x$:
End of explanation
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2, coefficients='symbolic')
Explanation: By default the 'standard' Taylor series expansion result, where h_x represents the $x$-direction grid spacing, is returned. However, there may be instances when a user wishes to use 'non-standard' weights when, for example, implementing a dispersion-relation-preserving (DRP) scheme. See e.g.
[1] Christopher K.W. Tam, Jay C. Webb (1993). ”Dispersion-Relation-Preserving Finite Difference Schemes for Computational Acoustics.” J. Comput. Phys., 107(2), 262--281. https://doi.org/10.1006/jcph.1993.1142
for further details. The use of such modified weights is facilitated in Devito via the 'symbolic' finite difference coefficients functionality. Let us start by re-defining the function $u(\mathbf{x},t)$ in the following manner:
End of explanation
print(u.dx.evaluate)
Explanation: Note the addition of the coefficients='symbolic' keyword. Now, when printing $\partial u/\partial x$ we obtain:
End of explanation
from devito import Coefficient, Substitutions # Import the Devito Coefficient and Substitutions objects
# Grab the grid spatial dimensions: Note x[0] will correspond to the x-direction and x[1] to y-direction
x = grid.dimensions
# Form a Coefficient object and then a replacement rules object (to pass to a Devito equation):
u_x_coeffs = Coefficient(1, u, x[0], np.array([-0.6, 0.1, 0.6]))
coeffs = Substitutions(u_x_coeffs)
Explanation: Owing to the addition of the coefficients='symbolic' keyword, the weights have been replaced by sympy functions. Now take, for example, the weight W(x - h_x, 1, u(t, x, y), x); the notation is as follows:
* The first x - h_x refers to the spatial location of the weight w.r.t. the evaluation point x.
* The 1 refers to the order of the derivative.
* u(t, x, y) refers to the function with which the weight is associated.
* Finally, the x refers to the dimension along which the derivative is being taken.
Symbolic coefficients can then be manipulated using the Devito 'Coefficient' and 'Substitutions' objects. First, let us consider an example where we wish to replace the coefficients with a set of constants throughout the entire computational domain.
End of explanation
from devito import Eq
eq = Eq(u.dt+u.dx, coefficients=coeffs)
print(eq.evaluate)
Explanation: Devito Coefficient objects take arguments in the following order:
1. Derivative order (in the above example this is the first derivative)
2. Function to which the coefficients 'belong' (in the above example this is the time function u)
3. Dimension on which coefficients will be applied (in the above example this is the x-direction)
4. Coefficient data. Since, in the above example, the coefficients have been applied as a 1-d numpy array, replacement will occur at the equation level. (Note that other options are in development and will be the subject of future notebooks).
Now, let's form a Devito equation, pass it the Substitutions object, and take a look at the output:
End of explanation
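As a further illustration (not part of the original tutorial), the same Coefficient/Substitutions plumbing can carry weights for other derivatives too; here the standard 3-point second-derivative stencil in y is supplied purely to show the mechanics.
# Illustrative only: standard [1, -2, 1]/h_y**2 weights routed through the symbolic-coefficient machinery
u_yy_coeffs = Coefficient(2, u, x[1], np.array([1., -2., 1.])/x[1].spacing**2)
coeffs_xy = Substitutions(u_x_coeffs, u_yy_coeffs)
eq_xy = Eq(u.dt + u.dx + u.dy2, coefficients=coeffs_xy)
print(eq_xy.evaluate)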
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity
%matplotlib inline
# Define a physical size
Lx = 2000
Lz = Lx
h = 10
Nx = int(Lx/h)+1
Nz = Nx
shape = (Nx, Nz) # Number of grid point
spacing = (h, h) # Grid spacing in m. The domain size is now 2km by 2km
origin = (0., 0.)
# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :121] = 1.5
v[:, 121:] = 4.0
# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 10 grid points
nbl = 10
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=20, nbl=nbl, bcs="damp")
plot_velocity(model)
Explanation: We see that in the above equation the standard weights for the first derivative of u in the $x$-direction have now been replaced with our user defined weights. Note that since no replacement rules were defined for the time derivative (u.dt) standard weights have replaced the symbolic weights.
Now, let us consider a more complete example.
Example: Finite difference modeling for a large velocity-contrast acoustic wave model
It is advised to read through the 'Introduction to seismic modelling' notebook located in devito/examples/seismic/tutorials/01_modelling.ipynb before proceeding with this example, since much introductory material will be omitted here. The example now considered is based on an example introduced in
[2] Yang Liu (2013). ”Globally optimal finite-difference schemes based on least squares.” GEOPHYSICS, 78(4), 113--132. https://doi.org/10.1190/geo2012-0480.1.
See figure 18 of [2] for further details. Note that here we will simply use Devito to 'reproduce' the simulations leading to two results presented in the aforementioned figure. No analysis of the results will be carried out. The domain under consideration has a spatial extent of $2km \times 2km$ and, letting $x$ be the horizontal coordinate and $z$ the depth, a velocity profile such that $v_1(x,z)=1500ms^{-1}$ for $z\leq1200m$ and $v_2(x,z)=4000ms^{-1}$ for $z>1200m$.
End of explanation
from examples.seismic import TimeAxis
t0 = 0. # Simulation starts at t=0
tn = 500. # Simulation lasts 0.5 seconds (500 ms)
dt = 1.0 # Time step of 1.0ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource
f0 = 0.015 # Source peak frequency is 25Hz (0.025 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# We can plot the time signature to see the wavelet
src.show()
Explanation: The seismic wave source term will be modelled as a Ricker Wavelet with a peak-frequency of $25$Hz located at $(1000m,800m)$. Before applying the DRP scheme, we begin by generating a 'reference' solution using a spatially high-order standard finite difference scheme and time step well below the model's critical time-step. The scheme will be 2nd order in time.
End of explanation
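An optional sanity check (assuming the examples.seismic Model exposes critical_dt, as in the other Devito seismic tutorials): compare the chosen dt with the stability limit.
print("Critical dt: {:.4f} ms, dt used: {:.4f} ms".format(model.critical_dt, dt))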
# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=20)
# We can now write the PDE
pde = model.m * u.dt2 - H + model.damp * u.dt
# This discrete PDE can be solved in a time-marching way updating u(t+dt) from the previous time step
# Devito has a shortcut for u(t+dt), which is u.forward. We can then rewrite the PDE as
# a time marching updating equation known as a stencil using customized SymPy functions
from devito import solve
stencil = Eq(u.forward, solve(pde, u.forward).subs({H: u.laplace}))
# Finally we define the source injection to generate the corresponding code (no receivers are used in this example)
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)
Explanation: Now let us define our wavefield and PDE:
End of explanation
from devito import Operator
op = Operator([stencil] + src_term, subs=model.spacing_map)
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=dt)
Explanation: Now, let's create the operator and execute the time marching scheme:
End of explanation
#import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm
Lx = 2000
Lz = 2000
abs_lay = nbl*h
dx = h
dz = dx
X, Z = np.mgrid[-abs_lay: Lx+abs_lay+1e-10: dx, -abs_lay: Lz+abs_lay+1e-10: dz]
levels = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
Explanation: And plot the result:
End of explanation
from devito import SubDomain
# Define our 'upper' and 'lower' SubDomains:
class Upper(SubDomain):
name = 'upper'
def define(self, dimensions):
x, z = dimensions
# We want our upper layer to span the entire x-dimension and all
# but the bottom 80 (+boundary layer) cells in the z-direction, which is achieved via
# the following notation:
return {x: x, z: ('left', 80+nbl)}
class Lower(SubDomain):
name = 'lower'
def define(self, dimensions):
x, z = dimensions
# We want our lower layer to span the entire x-dimension and all
# but the top 121 (+boundary layer) cells in the z-direction.
return {x: x, z: ('right', 121+nbl)}
# Create these subdomains:
ur = Upper()
lr = Lower()
Explanation: We will now reimplement the above model applying the DRP scheme presented in [2].
First, since we wish to apply different custom FD coefficients in the upper and lower layers, we need to define these two 'subdomains' using the Devito SubDomain functionality:
End of explanation
#NBVAL_IGNORE_OUTPUT
# Our scheme will now be 10th order (or less) in space.
order = 10
# Create our model passing it our 'upper' and 'lower' subdomains:
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing,
space_order=order, nbl=nbl, subdomains=(ur,lr), bcs="damp")
Explanation: We now create our model incorporating these subdomains:
End of explanation
t0 = 0. # Simulation starts at t=0
tn = 500. # Simulation lasts 0.5 seconds (500 ms)
dt = 1.0 # Time step of 1.0ms
time_range = TimeAxis(start=t0, stop=tn, step=dt)
f0 = 0.025 # Source peak frequency is 25Hz (0.025 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0,
npoint=1, time_range=time_range)
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 800. # Depth is 800m
# New wave-field
u_DRP = TimeFunction(name="u_DRP", grid=model.grid, time_order=2, space_order=order, coefficients='symbolic')
Explanation: And re-define model related objects. Note that now our wave-field will be defined with coefficients='symbolic'.
End of explanation
# The underlying pde is the same in both subdomains
pde_DRP = model.m * u_DRP.dt2 - H + model.damp * u_DRP.dt
# Define our custom FD coefficients:
x, z = model.grid.dimensions
# Upper layer
weights_u = np.array([ 2.00462e-03, -1.63274e-02, 7.72781e-02,
-3.15476e-01, 1.77768e+00, -3.05033e+00,
1.77768e+00, -3.15476e-01, 7.72781e-02,
-1.63274e-02, 2.00462e-03])
# Lower layer
weights_l = np.array([ 0. , 0. , 0.0274017,
-0.223818, 1.64875 , -2.90467,
1.64875 , -0.223818, 0.0274017,
0. , 0. ])
# Create the Devito Coefficient objects:
ux_u_coeffs = Coefficient(2, u_DRP, x, weights_u/x.spacing**2)
uz_u_coeffs = Coefficient(2, u_DRP, z, weights_u/z.spacing**2)
ux_l_coeffs = Coefficient(2, u_DRP, x, weights_l/x.spacing**2)
uz_l_coeffs = Coefficient(2, u_DRP, z, weights_l/z.spacing**2)
# And the replacement rules:
coeffs_u = Substitutions(ux_u_coeffs,uz_u_coeffs)
coeffs_l = Substitutions(ux_l_coeffs,uz_l_coeffs)
# Create a stencil for each subdomain:
stencil_u = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward).subs({H: u_DRP.laplace}),
subdomain = model.grid.subdomains['upper'], coefficients=coeffs_u)
stencil_l = Eq(u_DRP.forward, solve(pde_DRP, u_DRP.forward).subs({H: u_DRP.laplace}),
subdomain = model.grid.subdomains['lower'], coefficients=coeffs_l)
# Source term:
src_term = src.inject(field=u_DRP.forward, expr=src * dt**2 / model.m)
# Create the operator, incorporating both upper and lower stencils:
op = Operator([stencil_u, stencil_l] + src_term, subs=model.spacing_map)
Explanation: We now create a stencil for each of our 'Upper' and 'Lower' subdomains defining different custom FD weights within each of these subdomains.
End of explanation
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=dt)
Explanation: And now execute the operator:
End of explanation
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,u_DRP.data[0,:,:], levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
ax1.set_title('$u_{DRP}(x,z,500)$')
plt.gca().invert_yaxis()
plt.show()
Explanation: And plot the new results:
End of explanation
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(111)
cont = ax1.contourf(X,Z,abs(u_DRP.data[0,:,:]-u.data[0,:,:]), levels, cmap=cm.binary)
fig.colorbar(cont)
ax1.axis([0, Lx, 0, Lz])
ax1.set_xlabel('$x$')
ax1.set_ylabel('$z$')
plt.gca().invert_yaxis()
plt.show()
#NBVAL_IGNORE_OUTPUT
# Wavefield norm checks
assert np.isclose(np.linalg.norm(u.data[-1]), 139.108, atol=0, rtol=1e-4)
assert np.isclose(np.linalg.norm(u_DRP.data[-1]), 83.636, atol=0, rtol=1e-4)
Explanation: Finally, for comparison, let's plot the difference between the standard 20th order and optimized 10th order models:
End of explanation |