Dataset columns:
repo_name: string (length 6–77)
path: string (length 8–215)
license: string (15 classes)
cells: sequence
types: sequence
ES-DOC/esdoc-jupyterhub
notebooks/cmcc/cmip6/models/sandbox-1/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: SANDBOX-1\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Meteorological Forcings\n5. Key Properties --> Resolution\n6. Key Properties --> Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --> Absorption\n12. Optical Radiative Properties --> Mixtures\n13. Optical Radiative Properties --> Impact Of H2o\n14. Optical Radiative Properties --> Radiative Scheme\n15. Optical Radiative Properties --> Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --> Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. 
Integrated Timestep\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --> Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --> Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Number Of Horizontal Gridpoints\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE    Type: BOOLEAN    Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --> Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Interactive Emitted Species\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of aerosol species emitted and specified via an "other method"", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nCharacteristics of the "other method" used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --> Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE    Type: FLOAT    Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --> Mixtures\n**\n12.1. External\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Internal\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --> Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --> Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --> Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. Gas Phase Precursors\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/ukesm1-0-ll/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: UKESM1-0-LL\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:15\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE    Type: STRING    Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. 
Total Depth\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. 
Continuously Varying Soil Depth\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE    Type: ENUM    Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE    Type: INTEGER    Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --> Heat Treatment\nTODO\n14.1. 
Description\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. 
Water Equivalent\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. Vegetation Map\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. Forest Stand Dynamics\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. 
Maintainance Respiration\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. 
Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE    Type: BOOLEAN    Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. 
Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE    Type: INTEGER    Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE    Type: ENUM    Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE    Type: STRING    Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE    Type: ENUM    Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE    Type: ENUM    Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE    Type: BOOLEAN    Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE    Type: STRING    Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
phoebe-project/phoebe2-docs
development/tutorials/gaussian_processes.ipynb
gpl-3.0
[ "Advanced: Gaussian Processes\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger('error')", "Create fake \"observations\"\nFor the purposes of this tutorial, we'll a simplified version of the fake \"observations\" used in the Fitting 2 Paper Examples. We'll only use a Johnson:V light curve here and leave the PHOEBE parameters to their true values. We'll also use the spherical distortion method to speed up computations.\nFor a full analysis of the system, see the stack of examples accompanying the paper.", "b = phoebe.default_binary()\nb.set_value_all('distortion_method', 'sphere')\n\nb.add_dataset('lc', passband='Johnson:V', dataset='mylcV')\n\nb['sma@binary'] = 9.435\nb['requiv@primary'] = 1.473\nb['requiv@secondary'] = 0.937\nb['incl@binary'] = 87.35\nb['period@binary'] = 2.345678901\nb['q@binary'] = 0.888\nb['teff@primary'] = 6342.\nb['teff@secondary'] = 5684.\nb['t0@system'] = 1.23456789\nb['ecc@orbit'] = 0.148\nb['per0@orbit'] = 65.5\nb['vgamma@system'] = 185.5\n\nt = np.arange(1., 10.35, 29.44/1440)\n\nb['times@mylcV@dataset'] = t\nb.run_compute()", "Let's now add some correlated noise to the data, comprised of a Gaussian, exponential and quadratic term. This trend mimics instrumental noise and does not bear any astrophysical significance. It is still important we account for it, as it can affect the values of some astrophysical parameters we do care about (in this case most prominently the parameters most closely related to the depths of the eclipses: the ratio of temperatures and radii).", "np.random.seed(1)\nnoiseV = 0.006 * np.exp( 0.4 + (t-t[0])/(t[-1]-t[0]) ) + np.random.normal(0.0, 0.003, len(t)) - 0.002*(t-t[0])**2/(t[-1]-t[0])**2 + 0.001*(t-t[0])/(t[-1]-t[0]) + 0.0002\nnoiseV -= np.mean(noiseV)", "Let's also generate a noise model that resembles an astrophysical signal: for example, stellar pulsations, which we will represent with a sum of sine functions:", "freqs = [1.97, 1.72, 2.98] # oscillation frequencies\namps = [0.034, 0.019, 0.019] # amplitudes\ndeltas = [0.16, 0.34, 0.86] # phase shifts\n\nterms = [amps[i]*np.sin(2*np.pi*(freqs[i]*t)+deltas[i]) for i in [0,1,2]]\nnoiseV_puls = np.sum(terms, axis=0) + np.random.normal(0.0, 0.003, len(t))\n\nfluxes = b.get_value('fluxes', context='model')\nplt.plot(t, fluxes+noiseV, label='instrumental')\nplt.plot(t, fluxes+noiseV_puls, label='astrophysical')\nplt.legend()", "Now we have some fake data with two types of noise suitable for modeling with GPs. \nPHOEBE supports two different GP models: 'sklearn', which uses the GP implementation in scikit-learn, and celerite2. We have found that sklearn works better for instrumental noise, while celerite2 is designed with astrophysical noise in mind. 
\nLet's demonstrate how these two work on our case:\nInstrumental noise: the sklearn GPs backend\nLet's now add the fluxes with instrumental noise to our bundle and plot the residuals between the true model and the fake observations with added noise:", "b['fluxes@mylcV@dataset'] = fluxes + noiseV\nb['sigmas@mylcV@dataset'] = 0.003*np.ones_like(fluxes)\n\n_ = b.plot(y='residuals', show=True)", "We can add a gaussian process kernel by only providing the GPs backend we want to use, in this case 'sklearn':", "b.add_gaussian_process('sklearn')\nprint(b['gp_sklearn01'])", "The default sklearn kernel is 'white', which models, as the name suggests, white noise with a single 'noise_level' parameter. Let's see what the other options are:", "b['kernel@gp_sklearn01'].choices", "To see in more detail how each one of these works, refer to https://scikit-learn.org/stable/modules/gaussian_process.html#gp-kernels. For this trend, we have found that a sum of a DotProduct and RBF kernel works best (for how we arrived to this see the Fitting 2 Paper Automated GP Selection Example.\nLet's switch the kernel type for the current one to DotProduct and add a new RBF kernel. These will then be summed when running compute.", "b['kernel@gp_sklearn01'] = 'dot_product'\nb.add_gaussian_process('sklearn', kernel='rbf')\n\n# set the parameters of the kernels to ones that model the noise trend closely\nb.set_value('sigma_0', feature='gp_sklearn01', value=0.0198)\nb.set_value('length_scale', feature='gp_sklearn02', value=71.0)", "Sometimes, there may be some residuals in the eclipses due to the PHOEBE model not fitting the data well. GPs are very sensitive to these residuals and more often that not, will begin \"stealing\" signal from the PHOEBE model, rendering it useless. We can prevent this by masking out the points in the eclipse when running GPs on the residuals, with the parameter 'gp_exclude_phases':", "b['gp_exclude_phases@mylcV'] = [[-0.04,0.04], [-0.52,0.40]]", "Let's finally compute the model with GPs and plot the result:", "b.run_compute(model='model_gps')\n\nb.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'},\n marker={'dataset': '.'},\n legend=True)\nb.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'},\n marker={'dataset': '.'},\n y='residuals',\n legend=True, show=True)", "We can see that the GP model accounted for the correlated noise in our data, which in turn allows the PHOEBE model to fit the astrophysical signal more accurately.\nAstrophysical noise: the celerite2 GPs backend\nLet's now replace the observed fluxes with those with astrophysical noise and plot the residuals:", "b['fluxes@mylcV@dataset'] = fluxes + noiseV_puls\nb['sigmas@mylcV@dataset'] = 0.003*np.ones_like(fluxes)\n\n_ = b.plot(model='latest', y='residuals', show=True)", "For this type of noise, a gaussian process kernel with the 'celerite2' backend on average works better than the 'sklearn' ExpSineSquared periodic kernel. We'll add two SHO kernels corresponding to the three frequencies used to generate the noise and approximate the other parameters. For a full description of each kernel and its parameters see the celerite2 documentation.", "b.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/1.97, tau=3)\nb.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/1.72, tau=3)\nb.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/2.98, tau=3)\n\nb.run_compute()", "Uh-oh, we get an error! 
The issue is that we already have a sklearn kernel attached to our data and ss of yet, PHOEBE only supports one GPs \"backend\" at a time, either sklearn or celerite2. The two can't be mixed, however you can mix different kernels from the same module. So, let's disable the sklearn GP features before moving on to compute the celerite2 GPs.", "b.disable_feature('gp_sklearn01')\nb.disable_feature('gp_sklearn02')\n\nb.run_compute(model='model_gps', overwrite=True)\n\nb.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'},\n marker={'dataset': '.'},\n legend=True, show=True)", "Now we have both a PHOEBE and pulsations model that we can fit jointly! Keep in mind that the GP model is not based on the actual physics of pulsations and can still overfit or steal signal from the PHOEBE model. Therefore, great care needs to be taken in these cases to avoid that!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/jax
docs/jax-101/05-random-numbers.ipynb
apache-2.0
[ "Pseudo Random Numbers in JAX\n\nAuthors: Matteo Hessel & Rosalia Schneider\nIn this section we focus on pseudo random number generation (PRNG); that is, the process of algorithmically generating sequences of numbers whose properties approximate the properties of sequences of random numbers sampled from an appropriate distribution. \nPRNG-generated sequences are not truly random because they are actually determined by their initial value, which is typically referred to as the seed, and each step of random sampling is a deterministic function of some state that is carried over from a sample to the next.\nPseudo random number generation is an essential component of any machine learning or scientific computing framework. Generally, JAX strives to be compatible with NumPy, but pseudo random number generation is a notable exception.\nTo better understand the difference between the approaches taken by JAX and NumPy when it comes to random number generation we will discuss both approaches in this section.\nRandom numbers in NumPy\nPseudo random number generation is natively supported in NumPy by the numpy.random module.\nIn NumPy, pseudo random number generation is based on a global state.\nThis can be set to a deterministic initial condition using random.seed(SEED).", "import numpy as np\nnp.random.seed(0)", "You can inspect the content of the state using the following command.", "def print_truncated_random_state():\n \"\"\"To avoid spamming the outputs, print only part of the state.\"\"\"\n full_random_state = np.random.get_state()\n print(str(full_random_state)[:460], '...')\n\nprint_truncated_random_state()", "The state is updated by each call to a random function:", "np.random.seed(0)\n\nprint_truncated_random_state()\n\n_ = np.random.uniform()\n\nprint_truncated_random_state()", "NumPy allows you to sample both individual numbers, or entire vectors of numbers in a single function call. For instance, you may sample a vector of 3 scalars from a uniform distribution by doing:", "np.random.seed(0)\nprint(np.random.uniform(size=3))", "NumPy provides a sequential equivalent guarantee, meaning that sampling N numbers in a row individually or sampling a vector of N numbers results in the same pseudo-random sequences:", "np.random.seed(0)\nprint(\"individually:\", np.stack([np.random.uniform() for _ in range(3)]))\n\nnp.random.seed(0)\nprint(\"all at once: \", np.random.uniform(size=3))", "Random numbers in JAX\nJAX's random number generation differs from NumPy's in important ways. The reason is that NumPy's PRNG design makes it hard to simultaneously guarantee a number of desirable properties for JAX, specifically that code must be:\n\nreproducible,\nparallelizable,\nvectorisable.\n\nWe will discuss why in the following. First, we will focus on the implications of a PRNG design based on a global state. Consider the code:", "import numpy as np\n\nnp.random.seed(0)\n\ndef bar(): return np.random.uniform()\ndef baz(): return np.random.uniform()\n\ndef foo(): return bar() + 2 * baz()\n\nprint(foo())", "The function foo sums two scalars sampled from a uniform distribution.\nThe output of this code can only satisfy requirement #1 if we assume a specific order of execution for bar() and baz(), as native Python does.\nThis doesn't seem to be a major issue in NumPy, as it is already enforced by Python, but it becomes an issue in JAX. \nMaking this code reproducible in JAX would require enforcing this specific order of execution. 
This would violate requirement #2, as JAX should be able to parallelize bar and baz when jitting as these functions don't actually depend on each other.\nTo avoid this issue, JAX does not use a global state. Instead, random functions explicitly consume the state, which is referred to as a key .", "from jax import random\n\nkey = random.PRNGKey(42)\n\nprint(key)", "A key is just an array of shape (2,).\n'Random key' is essentially just another word for 'random seed'. However, instead of setting it once as in NumPy, any call of a random function in JAX requires a key to be specified. Random functions consume the key, but do not modify it. Feeding the same key to a random function will always result in the same sample being generated:", "print(random.normal(key))\nprint(random.normal(key))", "Note: Feeding the same key to different random functions can result in correlated outputs, which is generally undesirable. \nThe rule of thumb is: never reuse keys (unless you want identical outputs).\nIn order to generate different and independent samples, you must split() the key yourself whenever you want to call a random function:", "print(\"old key\", key)\nnew_key, subkey = random.split(key)\ndel key # The old key is discarded -- we must never use it again.\nnormal_sample = random.normal(subkey)\nprint(r\" \\---SPLIT --> new key \", new_key)\nprint(r\" \\--> new subkey\", subkey, \"--> normal\", normal_sample)\ndel subkey # The subkey is also discarded after use.\n\n# Note: you don't actually need to `del` keys -- that's just for emphasis.\n# Not reusing the same values is enough.\n\nkey = new_key # If we wanted to do this again, we would use new_key as the key.", "split() is a deterministic function that converts one key into several independent (in the pseudorandomness sense) keys. We keep one of the outputs as the new_key, and can safely use the unique extra key (called subkey) as input into a random function, and then discard it forever.\nIf you wanted to get another sample from the normal distribution, you would split key again, and so on. The crucial point is that you never use the same PRNGKey twice. Since split() takes a key as its argument, we must throw away that old key when we split it.\nIt doesn't matter which part of the output of split(key) we call key, and which we call subkey. They are all pseudorandom numbers with equal status. The reason we use the key/subkey convention is to keep track of how they're consumed down the road. 
Subkeys are destined for immediate consumption by random functions, while the key is retained to generate more randomness later.\nUsually, the above example would be written concisely as", "key, subkey = random.split(key)", "which discards the old key automatically.\nIt's worth noting that split() can create as many keys as you need, not just 2:", "key, *forty_two_subkeys = random.split(key, num=43)", "Another difference between NumPy's and JAX's random modules relates to the sequential equivalence guarantee mentioned above.\nAs in NumPy, JAX's random module also allows sampling of vectors of numbers.\nHowever, JAX does not provide a sequential equivalence guarantee, because doing so would interfere with the vectorization on SIMD hardware (requirement #3 above).\nIn the example below, sampling 3 values out of a normal distribution individually using three subkeys gives a different result to giving a single key and specifying shape=(3,):", "key = random.PRNGKey(42)\nsubkeys = random.split(key, 3)\nsequence = np.stack([random.normal(subkey) for subkey in subkeys])\nprint(\"individually:\", sequence)\n\nkey = random.PRNGKey(42)\nprint(\"all at once: \", random.normal(key, shape=(3,)))", "Note that contrary to our recommendation above, we use key directly as an input to random.normal() in the second example. This is because we won't reuse it anywhere else, so we don't violate the single-use principle." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-veg/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: EC-EARTH-CONSORTIUM\nSource ID: EC-EARTH3-VEG\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:59\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-veg', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Reprenstation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
iurilarosa/thesis
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
gpl-3.0
[ "import numpy\nfrom scipy import sparse", "Prove manipolazioni array", "unimatr = numpy.ones((10,10))\n#unimatr\nduimatr = unimatr*2\n#duimatr\n\nuniarray = numpy.ones((10,1))\n#uniarray\n\ntriarray = uniarray*3\n\nscalarray = numpy.arange(10)\nscalarray = scalarray.reshape(10,1)\n\n#NB fare il reshape da orizzontale a verticale è come se aggiungesse\n#una dimensione all'array facendolo diventare un ndarray\n#(prima era un array semplice, poi diventa un array (x,1), quindi puoi fare trasposto)\n#NB NUMPY NON FA TRASPOSTO DI ARRAY SEMPLICE!\n#scalarray\nscalarray.T\n\nramatricia = numpy.random.randint(2, size=36).reshape((6,6))\nramatricia2 = numpy.random.randint(2, size=36).reshape((6,6))\n\n#WARNING questa operazione moltiplica elemento per elemento\n#se l'oggetto è di dimensione inferiore moltiplica ogni riga/colonna\n# o matrice verticale/orizzontale a seconda della forma dell'oggetto\n\nduimatr*scalarray\n#duimatr*scalarray.T\n#duimatr*duimatr\nramatricia*ramatricia2\n\n#numpy dot invece fa prodotto matriciale righe per colonne\n\nnumpy.dot(duimatr,scalarray)\n#numpy.dot(duimatr,duimatr)\nnumpy.dot(ramatricia2,ramatricia)\n\nduimatr + scalarray", "Prove creazione matrice 3D con prodotti esterni", "scalarray = numpy.arange(10)\nuniarray = numpy.ones(10)\n\nmatricia = numpy.outer(scalarray, uniarray)\nmatricia\n\ntensorio = numpy.outer(matricia,scalarray).reshape(10,10,10)\ntensorio\n# metodo di creazione array nd (numpy.ndarray)", "Prove manipolazione matrici 3D numpy", "tensorio = numpy.ones(1000).reshape(10,10,10)\ntensorio\n# metodo di creazione array nd (numpy.ndarray)\n#altro metodo è con comando diretto\n#tensorio = numpy.ndarray((3,3,3), dtype = int, buffer=numpy.arange(30))\n#potrebbe essere utile con la matrice sparsa della peakmap, anche se difficilmente è maneggiabile come matrice densa\n#oppure\n\n# HO FINALMENTE SCOPERTO COME SI METTE IL DTYPE COME SI DEVE!! 
con \"numpy.float32\"!\n#tensorio = numpy.zeros((3,3,3), dtype = numpy.float32)\n#tensorio.dtype\n#tensorio\n\n\nscalarray = numpy.arange(10)\nuniarray = numpy.ones(10)\nscalamatricia = numpy.outer(scalarray,scalarray)\n#scalamatricia\n\n\ntensorio * 2\ntensorio + 2\ntensorio + scalamatricia\n%time tensorio + scalarray\n%time tensorio.__add__(scalarray)\n#danno stesso risultato con tempi paragonabili\n", "Prove matrici sparse", "from scipy import sparse\n\n\nramatricia = numpy.random.randint(2, size=25).reshape((5,5))\nramatricia\n\n#efficiente per colonne\n#sparsamatricia = sparse.csc_matrix(ramatricia)\n#print(sparsamatricia)\n\n#per righe\nsparsamatricia = sparse.csr_matrix(ramatricia)\nprint(sparsamatricia)\n\nsparsamatricia.toarray()\n\nrighe = numpy.array([0,0,0,1,2,3,3,4])\ncolonne = numpy.array([0,0,4,2,1,4,3,0])\nvalori = numpy.ones(righe.size)\nsparsamatricia = sparse.coo_matrix((valori, (righe,colonne)))\n\nprint(sparsamatricia)\n\nsparsamatricia.toarray()", "Prodotto di matrici\nProdotti interni\nConsidera di avere 2 matrici, a e b, in forma numpy array:\n\na*b fa il prodotto elemento per elemento (solo se a e b hanno stessa dimensione)\nnumpy.dot(a,b) fa il prodotto matriciale righe per colonne\n\nOra considera di avere 2 matrici, a e b, in forma di scipy.sparse:\n\na*b fa il prodotto matriciale righe per colonne\nnumpy.dot(a,b) non funziona per nulla\na.dot(b) fa il prodotto matriciale righe per colonne", "#vari modi per fare prodotti di matrici (con somma con operatore + è lo stesso)\ndensamatricia = sparsamatricia.toarray()\n\n#densa-densa\nprodottoPerElementiDD = densamatricia*densamatricia\nprodottoMatricialeDD = numpy.dot(densamatricia, densamatricia)\n\n#sparsa-densa\nprodottoMatricialeSD = sparsamatricia*densamatricia\nprodottoMatricialeSD2 = sparsamatricia.dot(densamatricia)\n\n#sparsa-sparsa\nprodottoMatricialeSS = sparsamatricia*sparsamatricia\nprodottoMatricialeSS2 = sparsamatricia.dot(sparsamatricia)\n\n# \"SPARSA\".dot(\"SPARSA O DENSA\") FA PRODOTTO MATRICIALE\n# \"SPARSA * SPARSA\" FA PRODOTTO MATRICIALE\n\n\nprodottoMatricialeDD - prodottoMatricialeSS\n#nb somme e sottrazioni tra matrici sparse e dense sono ok\n# prodotto matriciale tra densa e sparsa funziona come sparsa e sparsa", "Prodotti esterni", "densarray = numpy.array([\"a\",\"b\"],dtype = object)\ndensarray2 = numpy.array([\"c\",\"d\"],dtype = object)\n\nnumpy.outer(densarray,[1,2])\n\ndensamatricia = numpy.array([[1,2],[3,4]])\ndensamatricia2 = numpy.array([[\"a\",\"b\"],[\"c\",\"d\"]], dtype = object)\nnumpy.outer(densamatricia2,densamatricia).reshape(4,2,2)\n\ndensarray1 = numpy.array([0,2])\ndensarray2 = numpy.array([5,0])\ndensamatricia = numpy.array([[1,2],[3,4]])\ndensamatricia2 = numpy.array([[0,2],[5,0]])\n\nnrighe = 2\nncolonne = 2\nnpiani = 4\nprodottoEstDD = numpy.outer(densamatricia,densamatricia2).reshape(npiani,ncolonne,nrighe)\n#prodottoEstDD\n#prodottoEstDD = numpy.dstack((prodottoEstDD[0,:],prodottoEstDD[1,:]))\n\nprodottoEstDD\n\n\nsparsarray1 = sparse.csr_matrix(densarray1)\nsparsarray2 = sparse.csr_matrix(densarray2)\nsparsamatricia = sparse.csr_matrix(densamatricia)\nsparsamatricia2 = sparse.csr_matrix(densamatricia2)\n\nprodottoEstSS = sparse.kron(sparsamatricia,sparsamatricia2).toarray()\n\nprodottoEstSD = sparse.kron(sparsamatricia,densamatricia2).toarray()\nprodottoEstSD\n\n\n\n\n#prove prodotti esterni\n# numpy.outer\n# scipy.sparse.kron\n\n#densa-densa\nprodottoEsternoDD = numpy.outer(densamatricia,densamatricia)\n\n#sparsa-densa\nprodottoEsternoSD = 
sparse.kron(sparsamatricia,densamatricia)\n\n#sparsa-sparsa\nprodottoEsternoSS = sparse.kron(sparsamatricia,sparsamatricia)\n\n\nprodottoEsternoDD-prodottoEsternoSS\n\n# altre prove di prodotti esterni\nrarray1 = numpy.random.randint(2, size=4)\nrarray2 = numpy.random.randint(2, size=4)\nprint(rarray1,rarray2)\nramatricia = numpy.outer(rarray1,rarray2)\nunimatricia = numpy.ones((4,4)).astype(int)\n#ramatricia2 = rarray1 * rarray2.T\nprint(ramatricia,unimatricia)\n#print(ramatricia)\n#print(\"eppoi\")\n#print(ramatricia2)\n\n#sparsarray = sparse.csr_matrix(rarray1)\n#print(sparsarray)\n\n#ramatricia2 = \n\n#il mio caso problematico è che ho una matrice di cui so tutti gli elementi non zero,\n#so quante righe ho (i tempi), ma non so quante colonne di freq ho\nrandomcolonne = numpy.random.randint(10)+1\nramatricia = numpy.random.randint(2, size=10*randomcolonne).reshape((10,randomcolonne))\nprint(ramatricia.shape)\n#ramatricia\nnonzeri = numpy.nonzero(ramatricia)\nndati = len(nonzeri[0])\nndati\nramatricia\n\n#ora cerco di fare la matrice sparsa\nprint(ndati)\ndati = numpy.ones(2*ndati).reshape(ndati,2)\ndati\ncoordinateRighe = nonzeri[0]\ncoordinateColonne = nonzeri[1]\nsparsamatricia = sparse.coo_matrix((dati,(coordinateRighe,coordinateColonne)))\ndensamatricia = sparsamatricia.toarray()\ndensamatricia", "Provo a passare operazioni a array con array di coordinate", "matrice = numpy.arange(30).reshape(10,3)\nmatrice\n\nrighe = numpy.array([1,0,1,1])\ncolonne = numpy.array([2,0,2,2])\npesi = numpy.array([100,200,300,10])\nprint(righe,colonne)\n\nmatrice[righe,colonne]\n\n\nmatrice[righe,colonne] = (matrice[righe,colonne] + numpy.array([100,200,300,10]))\nmatrice\n\n%matplotlib inline\na = pyplot.imshow(matrice)\n\nnumpy.add.at(matrice, [righe,colonne],pesi)\nmatrice\n\n%matplotlib inline\na = pyplot.imshow(matrice)\n\nmatr", "Prove plots", "from matplotlib import pyplot\n%matplotlib inline\n\n\n##AL MOMENTO INUTILE, NON COMPILARE\nx = numpy.random.randint(10,size = 10)\ny = numpy.random.randint(10,size = 10)\npyplot.scatter(x,y, s = 5)\n#nb imshow si può fare solo con un 2d array\n\n#visualizzazione di una matrice, solo matrici dense a quanto pare\na = pyplot.imshow(densamatricia)\n#a = pyplot.imshow(sparsamatricia)\n#c = pyplot.matshow(densamatricia)\n\n\n#spy invece funziona anche per le sparse!\npyplot.spy(sparsamatricia,precision=0.01, marker = \".\", markersize=10)\n\n#in alternativa, scatterplot delle coordinate dal dataframe\nb = pyplot.scatter(coordinateColonne,coordinateRighe, s = 2)\n\nimport seaborn\n%matplotlib inline\n\n\nsbRegplot = seaborn.regplot(x=coordinateRighe, y=coordinateColonne, color=\"g\", fit_reg=False)\n\nimport pandas\n\ncoordinateRighe = coordinateRighe.reshape(len(coordinateRighe),1)\ncoordinateColonne = coordinateColonne.reshape(len(coordinateColonne),1)\n#print([coordinateRighe,coordinateColonne])\ncoordinate = numpy.concatenate((coordinateRighe,coordinateColonne),axis = 1)\ncoordinate\n\n\ntabella = pandas.DataFrame(coordinate)\ntabella.columns = [\"righe\", \"colonne\"]\n\n\nsbPlmplot = seaborn.lmplot(x = \"righe\", y = \"colonne\", data = tabella, fit_reg=False)\n\n", "Un esempio semplice del mio problema", "import numpy\nfrom scipy import sparse\nimport multiprocessing\nfrom matplotlib import pyplot\n\n#first i build a matrix of some x positions vs time datas in a sparse format\nmatrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)\nx = numpy.nonzero(matrix)[0]\ntimes = numpy.nonzero(matrix)[1]\nweights = 
numpy.random.rand(x.size)\n\n\n\nimport scipy.io\n\nmint = numpy.amin(times)\nmaxt = numpy.amax(times)\n\nscipy.io.savemat('debugExamples/numpy.mat',{\n 'matrix':matrix, \n 'x':x, \n 'times':times, \n 'weights':weights,\n 'mint':mint,\n 'maxt':maxt,\n \n})\n\ntimes\n\n#then i define an array of y positions\nnStepsY = 5\ny = numpy.arange(1,nStepsY+1)\n\n# provo a iterare\n# VERSIONE CON HACK CON SPARSE verificato viene uguale a tutti gli altri metodi più semplici che ho provato\n# ma ha problemi con parallelizzazione\n\nnRows = nStepsY\nnColumns = 80\ny = numpy.arange(1,nStepsY+1)\nimage = numpy.zeros((nRows, nColumns))\ndef itermatrix(ithStep):\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n fakeRow = numpy.zeros(positions.size)\n matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense()\n matrix = numpy.ravel(matrix)\n missColumns = (nColumns-matrix.size)\n zeros = numpy.zeros(missColumns)\n matrix = numpy.concatenate((matrix, zeros))\n return matrix\n\n#for i in numpy.arange(nStepsY):\n# image[i] = itermatrix(i)\n\n#or\nimageSparsed = list(map(itermatrix, range(nStepsY)))\nimageSparsed = numpy.array(imageSparsed)\nscipy.io.savemat('debugExamples/numpyResult.mat', {'imageSparsed':imageSparsed}) \na = pyplot.imshow(imageSparsed, aspect = 10)\npyplot.show()\n\nimport numpy\nfrom scipy import sparse\nimport multiprocessing\nfrom matplotlib import pyplot\n\n#first i build a matrix of some x positions vs time datas in a sparse format\nmatrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10)\ntimes = numpy.nonzero(matrix)[0]\nfreqs = numpy.nonzero(matrix)[1]\nweights = numpy.random.rand(times.size)\n\n#then i define an array of y positions\nnStepsSpindowns = 5\nspindowns = numpy.arange(1,nStepsSpindowns+1)\n\n\n#PROVA CON BINCOUNT\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n sdTimed = spindowns[ithStep]*times\n positions = (numpy.round(freqs-sdTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n return image\n\n\n%time imageMapped = list(map(mapIt, range(nStepsSpindowns)))\nimageMapped = numpy.array(imageMapped)\n\n%matplotlib inline\na = pyplot.imshow(imageMapped, aspect = 10)\n\n# qui provo fully vectorial\ndef fullmatrix(nRows, nColumns):\n spindowns = numpy.arange(1,nStepsSpindowns+1)\n image = numpy.zeros((nRows, nColumns))\n\n sdTimed = numpy.outer(spindowns,times)\n freqs3d = numpy.outer(numpy.ones(nStepsSpindowns),freqs)\n weights3d = numpy.outer(numpy.ones(nStepsSpindowns),weights)\n spindowns3d = numpy.outer(spindowns,numpy.ones(times.size))\n positions = (numpy.round(freqs3d-sdTimed)+50).astype(int)\n\n matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(spindowns3d), numpy.ravel(positions)))).todense()\n return matrix\n\n%time image = fullmatrix(nStepsSpindowns, 80)\na = pyplot.imshow(image, aspect = 10)\npyplot.show()", "Confronti Debug!", "#confronto con codice ORIGINALE in matlab\nimmagineOrig = scipy.io.loadmat('debugExamples/dbOrigResult.mat')['binh_df0']\na = pyplot.imshow(immagineOrig[:,0:80], aspect = 10)\npyplot.show()\n\n#PROVA CON BINCOUNT\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = 
values\n return image\n\n\n%time imageMapped = list(map(mapIt, range(nStepsY)))\nimageMapped = numpy.array(imageMapped)\n\n%matplotlib inline\na = pyplot.imshow(imageMapped, aspect = 10)\n\n# qui provo con vettorializzazione di numpy (apply along axis)\nnrows = nStepsY\nncolumns = 80\nmatrix = numpy.zeros(nrows*ncolumns).reshape(nrows,ncolumns)\n\ndef applyIt(image):\n ithStep = 1\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n #print(positions)\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n \n return image\n\n\nimageApplied = numpy.apply_along_axis(applyIt,1,matrix)\na = pyplot.imshow(imageApplied, aspect = 10)\n\n# qui provo fully vectorial\ndef fullmatrix(nRows, nColumns):\n y = numpy.arange(1,nStepsY+1)\n image = numpy.zeros((nRows, nColumns))\n\n yTimed = numpy.outer(y,times)\n x3d = numpy.outer(numpy.ones(nStepsY),x)\n weights3d = numpy.outer(numpy.ones(nStepsY),weights)\n y3d = numpy.outer(y,numpy.ones(x.size))\n positions = (numpy.round(x3d-yTimed)+50).astype(int)\n\n matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions)))).todense()\n return matrix\n\n%time image = fullmatrix(nStepsY, 80)\na = pyplot.imshow(image, aspect = 10)\npyplot.show()\n\nimageMapped = list(map(itermatrix, range(nStepsY)))\nimageMapped = numpy.array(imageMapped)\na = pyplot.imshow(imageMapped, aspect = 10)\npyplot.show()\n\n# prova con numpy.put\n\nnStepsY = 5\n\ndef mapIt(ithStep):\n ncolumns = 80\n image = numpy.zeros(ncolumns)\n\n yTimed = y[ithStep]*times\n positions = (numpy.round(x-yTimed)+50).astype(int)\n\n values = numpy.bincount(positions,weights)\n values = values[numpy.nonzero(values)]\n positions = numpy.unique(positions)\n image[positions] = values\n return image\n\n\n%time imagePutted = list(map(mapIt, range(nStepsY)))\nimagePutted = numpy.array(imagePutted)\n\n%matplotlib inline\na = pyplot.imshow(image, aspect = 10)\npyplot.show()", "Documentazione\n\nRoba di array vari di numpy\n\n\nDomanda interessante su creazione matrici (stackoverflow)\nCreazione array ND\nOperatore add equivalente ad a+b per array ND\nData types\nProdotto tensore (da vedere ancora)\nGenerazione array ND random\nGenerazione array 1D random intero (eg binario)\nDà le coordinate di tutti gli elementi nonzero\nConcatenate: unisce due array in un solo array (mette il secondo dopo il primo nello stesso array, poi eventualmenete va reshapato se si vuole fare una matrice da più arrays)\nStack: unisce due array, forse migliore di concatenate, forse li aggiunge facendo una matrice\n\n\nRoba di matrici sparse\n\n\nCreazione sparse (nb vedi esempio finale per mio caso)\nCreazione sparsa random\nForma in cui fa prodotto esterno\n\n\nRoba scatterplot et similia\n\n\nScatterplot (nb attenti alle coordinate)\nPlot di matrici (imshow)\nTutorial per imshow\nSpy FA PLOT DI MATRICI SPARSE!\nPlots con seaborn: regplot) (più semplice, come pyplot vuole solo due array delle coordinate),lmplot (vuole dataframe),pairplot (non mi dovrebbe servire)\nEsempio scatterplot con lmplot (v anche siscomp)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ProfessorKazarinoff/staticsite
content/code/matplotlib_plots/plotting_trig_functions.ipynb
gpl-3.0
[ "Plotting is an essential skill for Engineers. Plots can reveal trends in data and outliers. Plots are a way to visually communicate results with your engineering team, supervisors and customers. In this post, we are going to plot a couple of trig functions using Python and matplotlib. Matplotlib is a plotting library that can produce line plots, bar graphs, histograms and many other types of plots using Python. Matplotlib is not included in the standard library. If you downloaded Python from python.org, you will need to install matplotlib and numpy with pip on the command line. \n```text\n\npip install matplotlib\npip install numpy\n```\n\nIf you are using the Anaconda distribution of Python (which is the distribution of Python I recommend for undergraduate engineers) matplotlib and numpy (plus a bunch of other libraries useful for engineers) are included. If you are using Anaconda, you do not need to install any additional packages to use matplotlib.\nIn this post, we are going to build a couple of plots which show the trig functions sine and cosine. We'll start by importing matplotlib and numpy using the standard lines import matplotlib.pyplot as plt and import numpy as np. This means we can use the short alias plt and np when we call these two libraries. You could import numpy as wonderburger and use wonderburger.sin() to call the numpy sine function, but this would look funny to other engineers. The line import numpy as np has become a common convention and will look familiar to other engineers using Python. In case you are working in a Juypiter notebook, the %matplotlib inline command is also necessary to view the plots directly in the notebook.", "import matplotlib.pyplot as plt\nimport numpy as np\n# if using a jupyter notebook\n%matplotlib inline ", "Next we will build a set of x values from zero to 4&pi; in increments of 0.1 radians to use in our plot. The x-values are stored in a numpy array. Numpy's arange() function has three arguments: start, stop, step. We start at zero, stop at 4&pi; and step by 0.1 radians. Then we define a variable y as the sine of x using numpy's sin() function.", "x = np.arange(0,4*np.pi,0.1) # start,stop,step\ny = np.sin(x)", "To create the plot, we use matplotlib's plt.plot() function. The two arguments are our numpy arrays x and y. The line plt.show() will show the finished plot.", "plt.plot(x,y)\nplt.show()", "Next let's build a plot which shows two trig functions, sine and cosine. We will create the same two numpy arrays x and y as before, and add a third numpy array z which is the cosine of x.", "x = np.arange(0,4*np.pi,0.1) # start,stop,step\ny = np.sin(x)\nz = np.cos(x)", "To plot both sine and cosine on the same set of axies, we need to include two pair of x,y values in our plt.plot() arguments. The first pair is x,y. This corresponds to the sine function. The second pair is x,z. This correspons to the cosine function. If you try and only add three arguments as in plt.plot(x,y,z), your plot will not show sine and cosine on the same set of axes.", "plt.plot(x,y,x,z)\nplt.show()", "Let's build one more plot, a plot which shows the sine and cosine of x and also includes axis labels, a title and a legend. We build the numpy arrays using the trig functions as before:", "x = np.arange(0,4*np.pi-1,0.1) # start,stop,step\ny = np.sin(x)\nz = np.cos(x)", "The plt.plot() call is the same as before using two pairs of x and y values. 
To add axis labels we will use the following methods:\n| matplotlib method | description | example |\n| ----------------- | ----------- | ------- |\n| plt.xlabel() | x-axis label | plt.xlabel('x values from 0 to 4pi') |\n| plt.ylabel() | y-axis label | plt.ylabel('sin(x) and cos(x)') |\n| plt.title() | plot title | plt.title('Plot of sin and cos from 0 to 4pi') |\n| plt.legend([ ]) | legend | plt.legend(['sin(x)', 'cos(x)']) |\nNote that the plt.legend() method requires a list of strings (['string1', 'string2']), where the individual strings are enclosed with quotes, then separated by commas and finally enclosed in brackets to make a list. The first string in the list corresponds to the first x,y pair when we called plt.plot(), the second string in the list corresponds to the second x,y pair in the plt.plot() line.", "plt.plot(x,y,x,z)\nplt.xlabel('x values from 0 to 4pi') # string must be enclosed with quotes ' '\nplt.ylabel('sin(x) and cos(x)')\nplt.title('Plot of sin and cos from 0 to 4pi')\nplt.legend(['sin(x)', 'cos(x)']) # legend entries as separate strings in a list\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yunfeiz/py_learnt
sample_code/date_utils.ipynb
apache-2.0
[ "import tushare as ts\nimport pandas as pd\nimport numpy as np\n\nfrom xpinyin import Pinyin\n\ndf=ts.get_stock_basics()\ndf.head(5)\natt=df.columns.values.tolist()\n#clommun_show = ['name', 'pe', 'outstanding', 'totals', 'totalAssets', 'liquidAssets', 'fixedAssets',\n#'esp', 'bvps', 'pb', 'perundp', 'rev', 'profit', 'gpr', 'npr', 'holders']\n\npin=Pinyin()\ndf['UP'] = None\nfor index, row in df.iterrows():\n name_str = df.name[index]\n #print(name_str)\n up_letter = pin.get_initials(name_str,u'')\n #print(up_letter)\n df.at[index,['UP']]=up_letter\n#df[df['UP']=='HTGD']\ndf['code']=df.index\n#print(df.UP)", "code,代码\nname,名称\nindustry,所属行业\narea,地区\npe,市盈率\noutstanding,流通股本(亿)\ntotals,总股本(亿)\ntotalAssets,总资产(万)\nliquidAssets,流动资产\nfixedAssets,固定资产\nreserved,公积金\nreservedPerShare,每股公积金\nesp,每股收益\nbvps,每股净资\npb,市净率\ntimeToMarket,上市日期\nundp,未分利润\nperundp, 每股未分配\nrev,收入同比(%)\nprofit,利润同比(%)\ngpr,毛利率(%)\nnpr,净利润率(%)\nholders,股东人数\n['name', 'pe', 'outstanding', 'totals', 'totalAssets', 'liquidAssets', 'fixedAssets', 'esp', 'bvps', 'pb', 'perundp', 'rev', 'profit', 'gpr', 'npr', 'holders']", "col_show = ['name', 'open', 'pre_close', 'price', 'high', 'low', 'volume', 'amount', 'time', 'code']\ninitial_letter = ['HTGD','OFKJ','CDKJ','ZJXC','GXKJ','FHTX','DZJG']\ncode =[]\nfor letter in initial_letter:\n code.append(df[df['UP']==letter].code[0])\n #print(code)\nif code != '': #not empty != ''\n df_price = ts.get_realtime_quotes(code)\n #print(df_price)\n #df_price.columns.values.tolist()\ndf_price[col_show]", "TO-DO\nAdd the map from initial to code\nbuild up a dataframe with fundamental and indicotors\nFor Leadings, need cache more data for the begining data", "from matplotlib.mlab import csv2rec\n\ndf=ts.get_k_data(\"002456\",start='2018-01-05',end='2018-01-09')\ndf.to_csv(\"temp.csv\")\nr=csv2rec(\"temp.csv\")\n#r.date\n\nimport time, datetime\n\n#str = df[df.code == '600487'][clommun_show].name.values\n#print(str)\ntoday=datetime.date.today()\nyesterday = today - datetime.timedelta(1)\n#print(today, yesterday)\ni = datetime.datetime.now()\nprint (\"当前的日期和时间是 %s\" % i)\nprint (\"ISO格式的日期和时间是 %s\" % i.isoformat() )\nprint (\"当前的年份是 %s\" %i.year)\nprint (\"当前的月份是 %s\" %i.month)\nprint (\"当前的日期是 %s\" %i.day)\nprint (\"dd/mm/yyyy 格式是 %s/%s/%s\" % (i.day, i.month, i.year) )\nprint (\"当前小时是 %s\" %i.hour)\nprint (\"当前分钟是 %s\" %i.minute)\nprint (\"当前秒是 %s\" %i.second)\n\nimport time\n \nlocaltime = time.localtime(time.time())\nprint(\"本地时间为 :\", localtime)\n\n# 格式化成2016-03-20 11:45:39形式\nprint(time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime()))\n \n# 格式化成Sat Mar 28 22:24:24 2016形式\nprint(time.strftime(\"%a %b %d %H:%M:%S %Y\", time.localtime()))\n\n#!/usr/bin/python\n# -*- coding: UTF-8 -*-\n \nimport calendar \ncal = calendar.month(2019, 3)\n#print (cal)" ]
[ "code", "markdown", "code", "markdown", "code" ]
xiaoxiaoyao/MyApp
jupyter_notebook/datascience.ipynb
unlicense
[ "如何用Python从海量文本抽取主题?\n你在工作、学习中是否曾因信息过载叫苦不迭?有一种方法能够替你读海量文章,并将不同的主题和对应的关键词抽取出来,让你谈笑间观其大略。本文使用Python对超过1000条文本做主题抽取,一步步带你体会非监督机器学习LDA方法的魅力。想不想试试呢?\n每个现代人,几乎都体会过信息过载的痛苦。文章读不过来,音乐听不过来,视频看不过来。可是现实的压力,使你又不能轻易放弃掉。\n准备\npip install jieba\npip install pyldavis\npip install pandas,sklearn\n为了处理表格数据,我们依然使用数据框工具Pandas。先调用它,然后读入我们的数据文件datascience.csv.", "import pandas as pd\ndf = pd.read_csv(\"datascience.csv\", encoding='gb18030') #注意它的编码是中文GB18030,不是Pandas默认设置的编码,所以此处需要显式指定编码类型,以免出现乱码错误。\n# 之后看看数据框的头几行,以确认读取是否正确。\ndf.head()\n\n#我们看看数据框的长度,以确认数据是否读取完整。\ndf.shape", "(1024, 3)\n行列数都与我们爬取到的数量一致,通过。\n分词\n下面我们需要做一件重要工作——分词\n我们首先调用jieba分词包。\n我们此次需要处理的,不是单一文本数据,而是1000多条文本数据,因此我们需要把这项工作并行化。这就需要首先编写一个函数,处理单一文本的分词。\n有了这个函数之后,我们就可以不断调用它来批量处理数据框里面的全部文本(正文)信息了。你当然可以自己写个循环来做这项工作。但这里我们使用更为高效的apply函数。如果你对这个函数有兴趣,可以点击这段教学视频查看具体的介绍。\n下面这一段代码执行起来,可能需要一小段时间。请耐心等候。", "import jieba\ndef chinese_word_cut(mytext):\n return \" \".join(jieba.cut(mytext))\ndf[\"content_cutted\"] = df.content.apply(chinese_word_cut)\n\n#执行完毕之后,我们需要查看一下,文本是否已经被正确分词。\ndf.content_cutted.head()\n\n#文本向量化\nfrom sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer\nn_features = 1000\ntf_vectorizer = CountVectorizer(strip_accents = 'unicode',\n max_features=n_features,\n stop_words='english',\n max_df = 0.5,\n min_df = 10)\ntf = tf_vectorizer.fit_transform(df.content_cutted)", "我们需要人为设定主题的数量。这个要求让很多人大跌眼镜——我怎么知道这一堆文章里面多少主题?!\n别着急。应用LDA方法,指定(或者叫瞎猜)主题个数是必须的。如果你只需要把文章粗略划分成几个大类,就可以把数字设定小一些;相反,如果你希望能够识别出非常细分的主题,就增大主题个数。\n对划分的结果,如果你觉得不够满意,可以通过继续迭代,调整主题数量来优化。\n这里我们先设定为5个分类试试。", "#应用LDA方法\nfrom sklearn.decomposition import LatentDirichletAllocation\nn_topics = 5\nlda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50,\n learning_method='online',\n learning_offset=50.,\n random_state=0)\n\n#这一部分工作量较大,程序会执行一段时间,Jupyter Notebook在执行中可能暂时没有响应。等待一会儿就好,不要着急。\nlda.fit(tf)\n\n#主题没有一个确定的名称,而是用一系列关键词刻画的。我们定义以下的函数,把每个主题里面的前若干个关键词显示出来:\ndef print_top_words(model, feature_names, n_top_words):\n for topic_idx, topic in enumerate(model.components_):\n print(\"Topic #%d:\" % topic_idx)\n print(\" \".join([feature_names[i]\n for i in topic.argsort()[:-n_top_words - 1:-1]]))\n print()\n\n#定义好函数之后,我们暂定每个主题输出前20个关键词。\nn_top_words = 20\n\n#以下命令会帮助我们依次输出每个主题的关键词表:\ntf_feature_names = tf_vectorizer.get_feature_names()\nprint_top_words(lda, tf_feature_names, n_top_words)", "到这里,LDA已经成功帮我们完成了主题抽取。但是我知道你不是很满意,因为结果不够直观。\n那咱们就让它直观一些好了。\n执行以下命令,会有有趣的事情发生。", "import pyLDAvis\nimport pyLDAvis.sklearn\npyLDAvis.enable_notebook()\npyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)", "祝探索旅程愉快!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mitliagkas/graphs
randomwalks/WDC Random Walk.ipynb
mit
[ "import networkx as nx\nimport math\nimport numpy as np\n\nfrom IPython.display import Javascript", "Load data", "lastnode = 5000\n\ndatafile = open('/var/datasets/wdc/small-pld-arc')\n\nG = nx.DiGraph()\n\nfor line in datafile:\n ijstr = line.split('\\t')\n \n i=int(ijstr[0])\n j=int(ijstr[1])\n \n if i>lastnode:\n break\n if j>lastnode:\n continue\n G.add_edge(i,j)\n \ndatafile.close()\nGorig = G.copy()\n\nindexfile = open('/var/datasets/wdc/small-pld-index')\nindex = {}\n\nfor line in indexfile:\n namei = line.split('\\t')\n \n name=namei[0]\n i=int(namei[1])\n \n if i>lastnode:\n break\n\n index[i]=name\n \nindexfile.close()\n\ndef cleanupgraph(G):\n comp = nx.weakly_connected_components(G.copy())\n for c in comp:\n if len(c)<4:\n G.remove_nodes_from(c)\n\ndef graphcleanup(G):\n for (node, deg) in G.degree_iter():\n if deg==0:\n G.remove_node(node)\n elif deg==1:\n if G.degree((G.predecessors(node) + G.successors(node))[0]) == 1:\n G.remove_node(node)\n elif deg==2 and G.in_degree(node)==1:\n if (G.predecessors(node) == G.successors(node)) and G.degree((G.predecessors(node) + G.successors(node))[0]) == 2:\n G.remove_node(node)\n\ncleanupgraph(G)\n\nG.size()\n\nGorig.number_of_nodes()\n\nGorig.size()", "Convert to Javascript for interactivity\nAdapted from:\nhttp://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter06_viz/04_d3.ipynb\nFrom:\nhttp://networkx.github.io/documentation/latest/examples/javascript/force.html", "#from IPython.core.display import display_javascript\nimport json\nfrom networkx.readwrite import json_graph\n\nd = json_graph.node_link_data(G)\nfor node in d['nodes']:\n node['name']=node['id']\n node['value']=G.degree(node['id'])\n if True:\n node['group'] = node['id'] % 4\n else:\n if node['id']<10:\n node['group']=0#node['id'] % 4\n else:\n node['group']=1#node['id'] % 4\n \nd['adjacency'] = json_graph.adjacency_data(G)['adjacency']\njson.dump(d, open('rwgraph.json','w'))\n\n%%html\n<div id=\"d3-example\"></div>\n<style>\n.node {stroke: #fff; stroke-width: 1.5px;}\n.link {stroke: #999; stroke-opacity: .3;}\n</style>\n<script src=\"randomwalk.js\"></script>", "Uses:\nhttps://github.com/mbostock/d3/wiki/Force-Layout\nhttp://bl.ocks.org/mbostock/4062045", "Javascript(filename='force.js')\n\nL = nx.linalg.laplacianmatrix.directed_laplacian_matrix(G)\nLinv = np.linalg.inv(L)\n\nL.shape\n\nn = L.shape[0]\nReff = np.zeros((n,n))\n\nGsparse = G.copy()\n\ngraphcleanup(Gsparse)\n\nnodelookup={Gsparse.nodes()[idx]:idx for idx in range(len(Gsparse.nodes()))}\n\nedge = np.zeros((n,1))\nfor (i,j) in Gsparse.edges_iter():\n edge[nodelookup[i]] = 1\n edge[nodelookup[j]] = -1\n Reff[nodelookup[i],nodelookup[j]] = edge.T.dot(Linv.dot(edge))\n edge[[nodelookup[i]]] = 0\n edge[[nodelookup[j]]] = 0\n\nReffAbs=np.abs(Reff)+np.abs(Reff.T)", "If you call\narr.argsort()[:3]\nIt will give you the indices of the 3 smallest elements.\narray([0, 2, 1], dtype=int64)\nSo, for n, you should call\narr.argsort()[:n]", "res = ReffAbs.reshape(n**2)\nargp = np.argpartition(res,n**2-n)\n\nmask = (ReffAbs < res[argp[-int(0.5*Gsparse.number_of_nodes())]]) & (ReffAbs >0)\nfor (i,j) in Gsparse.edges():\n if mask[nodelookup[i],nodelookup[j]]:\n Gsparse.remove_edge(i,j)\n\ncleanupgraph(Gsparse)\n\nd = json_graph.node_link_data(Gsparse)\nfor node in d['nodes']:\n node['name']=index[node['id']]\n node['value']=Gsparse.degree(node['id'])\n node['group']=index[node['id']][-3:]\n\njson.dump(d, 
open('graph.json','w'))\n\nGorig.number_of_edges()\n\nGsparse.number_of_edges()\n\nGsparse.number_of_nodes()\n\n# rebuild the sparsified graph directly from a thresholded adjacency matrix\nGsparseAdj = nx.to_numpy_matrix(Gorig)\nGsparseAdj[ReffAbs < res[argp[-300]]] = 0\nGsparse = nx.from_numpy_matrix(GsparseAdj, create_using=nx.DiGraph())\n\n# recompute effective resistances for all node pairs (slow O(n^2) loop)\nedge = np.zeros((n,1))\nfor i in range(n):\n if i % int(math.ceil((float(10)/100)*n)) == 0: \n print(int(math.floor(100*float(i)/n)), '%')\n \n edge[i] = 1\n\n for j in range(i+1, n):\n edge[j] = -1\n Reff[i,j] = edge.T.dot(Linv.dot(edge))\n edge[j] = 0\n \n edge[i] = 0" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kmunve/APS
aps/notebooks/ml_varsom/preprocessing.ipynb
mit
[ "Pre-processing of avalanche warning data for machine learning", "import sys\nimport pandas as pd # check out Modin https://towardsdatascience.com/get-faster-pandas-with-modin-even-on-your-laptops-b527a2eeda74\nimport numpy as np\nimport json\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom pathlib import Path\nimport datetime\n\n# Add path to APS modules\naps_pth = Path('.').absolute()\nprint(aps_pth)\nif aps_pth not in sys.path:\n sys.path.append(aps_pth)\nsns.set(style=\"white\")\n#from sklearn.preprocessing import LabelEncoder\n#from pprint import pprint\n\n#pd.set_option(\"display.max_rows\",6)\n\n%matplotlib inline\n\n# analysis of entire data set - collected using varsomdata2.varsomscripts.avalanchewarningscomplete.get_season_17_18()\n#data_pth = Path(r'.\\aps\\data\\varsom\\norwegian_avalanche_warnings_season_17_18.csv')\ndata_pth = Path(r'D:\\Dev\\APS\\aps\\data\\varsom\\norwegian_avalanche_warnings_season_16_19.csv')\n\n#varsom_df = pd.read_csv(aps_pth / data_pth, index_col=0)\nvarsom_df = pd.read_csv(data_pth, index_col=0)\nvarsom_df.head()\n\nvarsom_df.columns.values\n\nvarsom_df[varsom_df['region_id']==3012].filter(['avalanche_problem_1_cause_id', 'avalanche_problem_1_cause_name',\n 'avalanche_problem_1_destructive_size_ext_id',\n 'avalanche_problem_1_destructive_size_ext_name',\n 'avalanche_problem_1_distribution_id',\n 'avalanche_problem_1_distribution_name',\n 'avalanche_problem_1_exposed_height_1',\n 'avalanche_problem_1_exposed_height_2',\n 'avalanche_problem_1_exposed_height_fill',\n 'avalanche_problem_1_ext_id', 'avalanche_problem_1_ext_name',\n 'avalanche_problem_1_probability_id',\n 'avalanche_problem_1_probability_name',\n 'avalanche_problem_1_problem_id',\n 'avalanche_problem_1_problem_type_id',\n 'avalanche_problem_1_problem_type_name',\n 'avalanche_problem_1_trigger_simple_id',\n 'avalanche_problem_1_trigger_simple_name',]).head(10)", "Check if there are missing values.", "# for col in varsom_df.columns.values:\n# print(f'{col}: {varsom_df[col].unique()} \\n')\n\n# Find the amount of NaN values in each column\nprint(varsom_df.isnull().sum().sort_values(ascending=False))", "Fill missing values where necessary.", "varsom_df['mountain_weather_wind_speed'] = varsom_df['mountain_weather_wind_speed'].fillna('None')\nvarsom_df['mountain_weather_wind_direction'] = varsom_df['mountain_weather_wind_direction'].fillna('None')\nprint(varsom_df.isnull().sum().sort_values(ascending=False))", "Feature engineering\nRe-label og -classifiy variables where necessary.\nAdd an avalanche problem severity index - based on its attributes size, distribution and sensitivity.\nWhen using shift or filling values using mean or similar, make sure to first sort individual regions and seasons by date.", "varsom_df['date'] = pd.to_datetime(varsom_df['date_valid'], infer_datetime_format=True)\n\ndef add_prevday_features(df):\n ### danger level\n df['danger_level_prev1day'] = df['danger_level'].shift(1)\n df['danger_level_name_prev1day'] = df['danger_level_name'].shift(1)\n df['danger_level_prev2day'] = df['danger_level'].shift(2)\n df['danger_level_name_prev2day'] = df['danger_level_name'].shift(2)\n df['danger_level_prev3day'] = df['danger_level'].shift(3)\n df['danger_level_name_prev3day'] = df['danger_level_name'].shift(3)\n\n ### avalanche problem\n df['avalanche_problem_1_cause_id_prev1day'] = df['avalanche_problem_1_cause_id'].shift(1)\n df['avalanche_problem_1_problem_type_id_prev1day'] = df['avalanche_problem_1_problem_type_id'].shift(1)\n 
df['avalanche_problem_1_cause_id_prev2day'] = df['avalanche_problem_1_cause_id'].shift(2)\n df['avalanche_problem_1_problem_type_id_prev2day'] = df['avalanche_problem_1_problem_type_id'].shift(2)\n df['avalanche_problem_1_cause_id_prev3day'] = df['avalanche_problem_1_cause_id'].shift(3)\n df['avalanche_problem_1_problem_type_id_prev3day'] = df['avalanche_problem_1_problem_type_id'].shift(3)\n\n df['avalanche_problem_2_cause_id_prev1day'] = df['avalanche_problem_2_cause_id'].shift(1)\n df['avalanche_problem_2_problem_type_id_prev1day'] = df['avalanche_problem_2_problem_type_id'].shift(1)\n df['avalanche_problem_2_cause_id_prev2day'] = df['avalanche_problem_2_cause_id'].shift(2)\n df['avalanche_problem_2_problem_type_id_prev2day'] = df['avalanche_problem_2_problem_type_id'].shift(2)\n df['avalanche_problem_2_cause_id_prev3day'] = df['avalanche_problem_2_cause_id'].shift(3)\n df['avalanche_problem_2_problem_type_id_prev3day'] = df['avalanche_problem_2_problem_type_id'].shift(3)\n\n ### weather\n df['mountain_weather_temperature_max_prev1day'] = df['mountain_weather_temperature_max'].shift(1)\n df['mountain_weather_temperature_max_prev2day'] = df['mountain_weather_temperature_max'].shift(2)\n df['mountain_weather_temperature_max_prev3day'] = df['mountain_weather_temperature_max'].shift(3)\n\n df['mountain_weather_temperature_min_prev1day'] = df['mountain_weather_temperature_min'].shift(1)\n df['mountain_weather_temperature_min_prev2day'] = df['mountain_weather_temperature_min'].shift(2)\n df['mountain_weather_temperature_min_prev3day'] = df['mountain_weather_temperature_min'].shift(3)\n\n df['mountain_weather_precip_region_prev1day'] = df['mountain_weather_precip_region'].shift(1)\n df['mountain_weather_precip_most_exposed_prev1day'] = df['mountain_weather_precip_most_exposed'].shift(1)\n df['mountain_weather_precip_region_prev3daysum'] = df['mountain_weather_precip_region'].shift(1) + df['mountain_weather_precip_region'].shift(2) + df['mountain_weather_precip_region'].shift(3)\n\n return df\n\nvarsom_df[(varsom_df['date']>=datetime.date(year=2016, month=12, day=1)) & (varsom_df['date']<datetime.date(year=2017, month=6, day=1))]\n\n# grouping by region and season\ngrouped_df = pd.DataFrame()\n\nfor id in varsom_df['region_id'].unique():\n#for id in [3003, 3011, 3014, 3028]:\n _tmp_df = varsom_df[varsom_df['region_id']==id].copy()\n _tmp_df.sort_values(by='valid_from')\n \n start, stop = int(_tmp_df['date_valid'].min()[:4]), int(_tmp_df['date_valid'].max()[:4])\n for yr in range(start, stop-1):\n _tmp_df[(_tmp_df['date']>=datetime.date(year=yr, month=12, day=1)) & (_tmp_df['date']<datetime.date(year=yr+1, month=6, day=1))]\n _tmp_df = add_prevday_features(_tmp_df)\n #print(len(_tmp_df), _tmp_df['region_id'].unique())\n if grouped_df.empty:\n print('empty')\n grouped_df = _tmp_df.copy()\n else:\n grouped_df = pd.concat([grouped_df, _tmp_df], ignore_index=True).copy()\n \n #print('g', len(grouped_df), grouped_df['region_id'].unique())\n \n\ngrouped_df.filter(['valid_from', 'region_name', 'region_id', 'avalanche_problem_1_problem_type_id', 'avalanche_problem_1_problem_type_id_prev2day'])\n\n\nvarsom_df = grouped_df.copy()\n\n#from aps.notebooks.ml_varsom.regroup_forecast import regroup\nfrom regroup_forecast import regroup\nvarsom_df = regroup(varsom_df)", "Add historical values, e.g. 
yesterdays precipitation\nAdd a tag to the feature name to indicate if it is categorical (c) or numerical (n).\nAdd a target tag (t).\nAdd a modelled (m) or observed (o) tag.\n_prev1day\n_prev3day\nn_f_Next24HourChangeInTempFromPrev3DayMax - change of temperature over a certain period.\nn_r_Prev7dayMinTemp2InPast - ???\nn_r_SNOWDAS_SnowpackAveTemp_k2InPast - modelled average temperature from model SNOWDAS (? https://nsidc.org/data/g02158)", "# Check if sensitivity transformation worked...\nprint(varsom_df['avalanche_problem_1_sensitivity_id_class'].value_counts())\n\nvarsom_df.filter(['mountain_weather_precip_region', 'mountain_weather_precip_region_prev3daysum']).head(12)\n\nvarsom_df[varsom_df['region_id']==3012].filter(['region_id', 'danger_level', 'danger_level_prev1day']).head(40)", "Combine avalanche problem attributes into single parameter", "def get_aval_problem_combined(type_, dist_, sens_, size_):\n return int(\"{0}{1}{2}{3}\".format(type_, dist_, sens_, size_))\n\n\ndef print_aval_problem_combined(aval_combined_int):\n aval_combined_str = str(aval_combined_int)\n #with open(aps_pth / r'aps/config/snoskred_keys.json') as jdata:\n with open(r'D:\\Dev\\APS\\aps\\config\\snoskred_keys.json') as jdata:\n snoskred_keys = json.load(jdata)\n type_ = snoskred_keys[\"Class_AvalancheProblemTypeName\"][aval_combined_str[0]]\n dist_ = snoskred_keys[\"Class_AvalDistributionName\"][aval_combined_str[1]]\n sens_ = snoskred_keys[\"Class_AvalSensitivityId\"][aval_combined_str[2]]\n size_ = snoskred_keys[\"DestructiveSizeId\"][aval_combined_str[3]]\n \n return f\"{type_}:{dist_}:{sens_}:{size_}\"\n\nprint(print_aval_problem_combined(6221))\n \n \n \nvarsom_df['aval_problem_1_combined'] = varsom_df.apply(lambda row: get_aval_problem_combined(row['avalanche_problem_1_problem_type_id_class'],\n row['avalanche_problem_1_distribution_id'],\n row['avalanche_problem_1_sensitivity_id_class'], #avalanche_problem_1_trigger_simple_id_class / avalanche_problem_1_sensitivity_id_class\n row['avalanche_problem_1_destructive_size_ext_id']), axis=1)\n\naval_uni = varsom_df['aval_problem_1_combined'].unique()\nprint(aval_uni, len(aval_uni))\nprint(varsom_df['aval_problem_1_combined'].value_counts())\nprint(varsom_df['avalanche_problem_1_problem_type_id_class'].value_counts())", "Hot encode categorical variables where necessary.", "# hot encode\nhot_encode_ = ['emergency_warning', 'author', 'mountain_weather_wind_direction']\nvarsom_df = pd.get_dummies(varsom_df, columns=hot_encode_)", "Check if there are no weired or missing values.", "# Check if there are no weired or missing values.\nfor col in varsom_df.columns.values:\n print(f'{col}: {varsom_df[col].unique()} \\n')", "Remove variables we know we do not need. 
In this case mainly because they are redundant like the avalanche_problem_1_ext_name and avalanche_problem_1_ext_id - in this case we only keep the numeric id variable.", "del_list = [\n 'utm_zone',\n 'utm_east',\n 'utm_north',\n 'danger_level_name',\n 'avalanche_problem_1_exposed_height_fill',\n 'avalanche_problem_2_exposed_height_fill',\n 'avalanche_problem_3_exposed_height_fill',\n 'avalanche_problem_1_valid_expositions',\n 'avalanche_problem_2_valid_expositions',\n 'avalanche_problem_3_valid_expositions',\n 'avalanche_problem_1_cause_name',\n 'avalanche_problem_1_problem_type_name',\n 'avalanche_problem_1_destructive_size_ext_name',\n 'avalanche_problem_1_distribution_name',\n 'avalanche_problem_1_ext_name',\n 'avalanche_problem_1_probability_name',\n 'avalanche_problem_1_trigger_simple_name',\n 'avalanche_problem_1_type_name',\n 'avalanche_problem_2_cause_name',\n 'avalanche_problem_2_problem_type_name',\n 'avalanche_problem_2_destructive_size_ext_name',\n 'avalanche_problem_2_distribution_name',\n 'avalanche_problem_2_ext_name',\n 'avalanche_problem_2_probability_name',\n 'avalanche_problem_2_trigger_simple_name',\n 'avalanche_problem_2_type_name',\n 'avalanche_problem_3_cause_name',\n 'avalanche_problem_3_problem_type_name',\n 'avalanche_problem_3_destructive_size_ext_name',\n 'avalanche_problem_3_distribution_name',\n 'avalanche_problem_3_ext_name',\n 'avalanche_problem_3_probability_name',\n 'avalanche_problem_3_trigger_simple_name',\n 'avalanche_problem_3_type_name',\n 'latest_avalanche_activity',\n 'main_text',\n 'snow_surface',\n 'current_weak_layers',\n 'avalanche_danger',\n 'avalanche_problem_1_advice',\n 'avalanche_problem_2_advice',\n 'avalanche_problem_3_advice',\n 'mountain_weather_wind_speed',\n 'region_type_name',\n 'region_name',\n 'reg_id',\n 'valid_from',\n 'valid_to'\n]\nremoved_ = [varsom_df.pop(v) for v in del_list]\nremoved_", "Fill missing values where necessary", "fill_list = [\n 'mountain_weather_freezing_level',\n 'mountain_weather_precip_region',\n 'mountain_weather_precip_region_prev1day',\n 'mountain_weather_precip_region_prev3daysum',\n 'mountain_weather_precip_most_exposed',\n 'mountain_weather_precip_most_exposed_prev1day',\n 'mountain_weather_temperature_min',\n 'mountain_weather_temperature_max',\n 'mountain_weather_temperature_elevation',\n 'danger_level_prev3day',\n 'avalanche_problem_1_problem_type_id_prev3day',\n 'avalanche_problem_2_problem_type_id_prev3day',\n 'avalanche_problem_2_cause_id_prev3day',\n 'avalanche_problem_1_cause_id_prev3day',\n 'danger_level_prev2day',\n 'avalanche_problem_1_cause_id_prev2day',\n 'avalanche_problem_1_problem_type_id_prev2day',\n 'avalanche_problem_2_cause_id_prev2day',\n 'avalanche_problem_2_problem_type_id_prev2day',\n 'avalanche_problem_2_cause_id_prev1day',\n 'avalanche_problem_2_problem_type_id_prev1day',\n 'avalanche_problem_1_problem_type_id_prev1day',\n 'avalanche_problem_1_cause_id_prev1day',\n 'danger_level_prev1day'\n]\nfilled_ = [varsom_df[v].fillna(0., inplace=True) for v in fill_list]\nfilled_", "Eventually remove variables with many missing values.", "del_list = [\n 'danger_level_name_prev1day', 'danger_level_name_prev2day', 'danger_level_name_prev3day',\n 'mountain_weather_change_wind_direction',\n 'mountain_weather_change_hour_of_day_start',\n 'mountain_weather_change_hour_of_day_stop',\n 'mountain_weather_change_wind_speed',\n 'mountain_weather_fl_hour_of_day_stop',\n 'mountain_weather_fl_hour_of_day_start',\n 'latest_observations', 'publish_time', 'date_valid',\n 
'mountain_weather_temperature_max_prev3day', 'mountain_weather_temperature_min_prev3day',\n 'mountain_weather_temperature_max_prev2day',\n 'mountain_weather_temperature_min_prev2day',\n 'mountain_weather_temperature_max_prev1day',\n 'mountain_weather_temperature_min_prev1day'\n]\nremoved_ = [varsom_df.pop(v) for v in del_list]", "Check again if there are still values missing...\nneed to replace these Nans with meaningful values or remove the feature.", "# Find the amount of NaN values in each column\nprint(varsom_df.isnull().sum().sort_values(ascending=False))\n\n# Compute the correlation matrix - works only on numerical variables.\ncorr = varsom_df.corr()\n\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\n# Set up the matplotlib figure\nf, ax = plt.subplots(figsize=(11, 11))\n\n# Generate a custom diverging colormap\ncmap = sns.diverging_palette(1000, 15, as_cmap=True)\n\n# Draw the heatmap with the mask and correct aspect ratio\nsns.heatmap(corr, mask=mask, cmap=cmap, vmax=.8, center=0,\n square=True, linewidths=.5, cbar_kws={\"shrink\": .5})", "We can see that some parameters are highly correlated. These are mainly the parameters belonging to the same avalanche problem. Depending on the ML algorithm we use we have to remove some of them.", "#corr['avalanche_problem_1_cause_id'].sort_values(ascending=False)\n#corr\n\n#sns.pairplot(varsom_df.drop(['date_valid'], axis=1))\n\n# Get all numerical features\n\nnum_feat = varsom_df._get_numeric_data().columns\nnum_feat\n\n# let's see the details about remainig variables \n\nvarsom_df.describe()", "Save data for further analysis", "varsom_df.to_csv('varsom_ml_preproc_3y.csv', index_label='index')", "Now we have clean data and can build a model\nThe library we'll use is called sckit-learn. \nhttp://scikit-learn.org\n\nPython library\nAccess to well known machine learning algorithms\nBuilt on NumPy, SciPy, and matplotlib\nOpen Source\nWell documented with many good tutorials\n\nWorklflow of scikit-learn\n\nCreate model object\n.fit\n.predict\nevaluate" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n\nOverview\nThis tutorial walks through building a custom container to serve a scikit-learn model on Vertex Predictions. You will use the FastAPI Python web server framework to create a prediction and health endpoint.\nYou will also cover incorporating a pre-processor from training into your online serving.\nDataset\nThis tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that\nmarks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.\nThis tutorial uses the copy of the Iris dataset included in the\nscikit-learn library.\nObjective\nThe goal is to:\n- Train a model that uses a flower's measurements as input to predict what type of iris it is.\n- Save the model and its serialized pre-processor\n- Build a FastAPI server to handle predictions and health checks\n- Build a custom container with model artifacts\n- Upload and deploy custom container to Vertex Prediction\nThis tutorial focuses more on deploying this model with Vertex AI than on\nthe design of the model itself.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\n\nLearn about Vertex AI\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets\nall the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements.\nYou need the following:\n\nDocker\nGit\nGoogle Cloud SDK (gcloud)\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Google Cloud guide to Setting up a Python development\nenvironment and the Jupyter\ninstallation guide provide detailed instructions\nfor meeting these requirements. The following steps provide a condensed set of\ninstructions:\n\n\nInstall and initialize the Cloud SDK.\n\n\nInstall Python 3.\n\n\nInstall\n virtualenv\n and create a virtual environment that uses Python 3. 
Activate the virtual environment.\n\n\nTo install Jupyter, run pip install jupyter on the\ncommand-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstall additional packages\nInstall additional package dependencies not installed in your notebook environment, such as NumPy, Scikit-learn, FastAPI, Uvicorn, and joblib. Use the latest major GA version of each package.", "%%writefile requirements.txt\njoblib~=1.0\nnumpy~=1.20\nscikit-learn~=0.24\ngoogle-cloud-storage>=1.26.0,<2.0.0dev\n\n# Required in Docker serving container\n%pip install -U --user -r requirements.txt\n\n# For local FastAPI development and running\n%pip install -U --user \"uvicorn[standard]>=0.12.0,<0.14.0\" fastapi~=0.63\n\n# Vertex SDK for Python\n%pip install -U --user google-cloud-aiplatform", "Restart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.", "# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI API and Compute Engine API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! or % as shell commands, and it interpolates Python variables with $ or {} into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "# Get your Google Cloud project ID from gcloud\nshell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null\n\ntry:\n PROJECT_ID = shell_output[0]\nexcept IndexError:\n PROJECT_ID = None\n\nprint(\"Project ID:\", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the Cloud Console, go to the Create service account key\n page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and\n click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\"\ninto the filter box, and select\n Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. 
A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebooks, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\") and not os.getenv(\n \"GOOGLE_APPLICATION_CREDENTIALS\"\n ):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Configure project and resource names", "REGION = \"us-central1\" # @param {type:\"string\"}\nMODEL_ARTIFACT_DIR = \"custom-container-prediction-model\" # @param {type:\"string\"}\nREPOSITORY = \"custom-container-prediction\" # @param {type:\"string\"}\nIMAGE = \"sklearn-fastapi-server\" # @param {type:\"string\"}\nMODEL_DISPLAY_NAME = \"sklearn-custom-container\" # @param {type:\"string\"}", "REGION - Used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Cloud\nVertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.\nMODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example: \"my-models/fraud-detection/trial-4\"\nREPOSITORY - Name of the Artifact Repository to create or use.\nIMAGE - Name of the container image that will be pushed.\nMODEL_DISPLAY_NAME - Display name of Vertex AI Model resource.\nCreate a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nTo update your model artifacts without re-building the container, you must upload your model\nartifacts and any custom code to Cloud Storage.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_NAME", "Write your pre-processor\nScaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.\nCreate preprocess.py, which contains a class to do this scaling:", "%mkdir app\n\n%%writefile app/preprocess.py\nimport numpy as np\n\nclass MySimpleScaler(object):\n def __init__(self):\n self._means = None\n self._stds = None\n\n def preprocess(self, data):\n if self._means is None: # during training only\n self._means = np.mean(data, axis=0)\n\n if self._stds is None: # during training only\n self._stds = np.std(data, axis=0)\n if not self._stds.all():\n raise ValueError(\"At least one column has standard deviation of 0.\")\n\n return (data - self._means) / self._stds\n", "Train and store model with pre-processor\nNext, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.\nAt the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:", "%cd app/\n\nimport pickle\n\nimport joblib\nfrom preprocess import MySimpleScaler\nfrom sklearn.datasets import load_iris\nfrom sklearn.ensemble import RandomForestClassifier\n\niris = load_iris()\nscaler = MySimpleScaler()\n\nX = scaler.preprocess(iris.data)\ny = iris.target\n\nmodel = RandomForestClassifier()\nmodel.fit(X, y)\n\njoblib.dump(model, \"model.joblib\")\nwith open(\"preprocessor.pkl\", \"wb\") as f:\n pickle.dump(scaler, f)", "Upload model artifacts and custom code to Cloud Storage\nBefore you can deploy your model for serving, Vertex AI needs access to the following files in Cloud Storage:\n\nmodel.joblib (model artifact)\npreprocessor.pkl (model artifact)\n\nRun the following commands to upload your files:", "!gsutil cp model.joblib preprocessor.pkl {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}/\n%cd ..", "Build a FastAPI server", "%%writefile app/main.py\nfrom fastapi import FastAPI, Request\n\nimport joblib\nimport json\nimport numpy as np\nimport pickle\nimport os\n\nfrom google.cloud import storage\nfrom preprocess import MySimpleScaler\nfrom sklearn.datasets import load_iris\n\n\napp = FastAPI()\ngcs_client = storage.Client()\n\nwith open(\"preprocessor.pkl\", 'wb') as preprocessor_f, open(\"model.joblib\", 'wb') as model_f:\n gcs_client.download_blob_to_file(\n f\"{os.environ['AIP_STORAGE_URI']}/preprocessor.pkl\", preprocessor_f\n )\n gcs_client.download_blob_to_file(\n f\"{os.environ['AIP_STORAGE_URI']}/model.joblib\", model_f\n )\n\nwith open(\"preprocessor.pkl\", \"rb\") as f:\n preprocessor = pickle.load(f)\n\n_class_names = load_iris().target_names\n_model = joblib.load(\"model.joblib\")\n_preprocessor = preprocessor\n\n\[email protected](os.environ['AIP_HEALTH_ROUTE'], status_code=200)\ndef health():\n return {}\n\n\[email protected](os.environ['AIP_PREDICT_ROUTE'])\nasync def predict(request: Request):\n body = await request.json()\n\n instances = body[\"instances\"]\n inputs = np.asarray(instances)\n preprocessed_inputs = _preprocessor.preprocess(inputs)\n outputs = _model.predict(preprocessed_inputs)\n\n return {\"predictions\": [_class_names[class_num] for class_num in outputs]}\n", "Add pre-start script\nFastAPI will execute this script before starting up the server. 
The PORT environment variable is set to equal AIP_HTTP_PORT in order to run FastAPI on the same port expected by Vertex AI.", "%%writefile app/prestart.sh\n#!/bin/bash\nexport PORT=$AIP_HTTP_PORT", "Store test instances to use later\nTo learn more about formatting input instances in JSON, read the documentation.", "%%writefile instances.json\n{\n    \"instances\": [\n        [6.7, 3.1, 4.7, 1.5],\n        [4.6, 3.1, 1.5, 0.2]\n    ]\n}", "Build and push container to Artifact Registry\nBuild your container\nOptionally copy in your credentials to run the container locally.", "# NOTE: Copy in credentials to run locally, this step can be skipped for deployment\n%cp $GOOGLE_APPLICATION_CREDENTIALS app/credentials.json", "Write the Dockerfile, using tiangolo/uvicorn-gunicorn-fastapi as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn. Visit the FastAPI docs to read more about deploying FastAPI with Docker.", "%%writefile Dockerfile\n\nFROM tiangolo/uvicorn-gunicorn-fastapi:python3.7\n\nCOPY ./app /app\nCOPY requirements.txt requirements.txt\n\nRUN pip install -r requirements.txt", "Build the image and tag the Artifact Registry path that you will push to.", "!docker build \\\n    --tag={REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} \\\n    .", "Run and test the container locally (optional)\nRun the container locally in detached mode and provide the environment variables that the container requires. These env vars will be provided to the container by Vertex Prediction once deployed. Test the /health and /predict routes, then stop the running image.", "!docker rm local-iris\n!docker run -d -p 80:8080 \\\n    --name=local-iris \\\n    -e AIP_HTTP_PORT=8080 \\\n    -e AIP_HEALTH_ROUTE=/health \\\n    -e AIP_PREDICT_ROUTE=/predict \\\n    -e AIP_STORAGE_URI={BUCKET_NAME}/{MODEL_ARTIFACT_DIR} \\\n    -e GOOGLE_APPLICATION_CREDENTIALS=credentials.json \\\n    {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}\n\n!curl localhost/health\n\n!curl -X POST \\\n    -d @instances.json \\\n    -H \"Content-Type: application/json; charset=utf-8\" \\\n    localhost/predict\n\n!docker stop local-iris", "Push the container to Artifact Registry\nConfigure Docker to access Artifact Registry. Then push your container image to your Artifact Registry repository.", "!gcloud beta artifacts repositories create {REPOSITORY} \\\n    --repository-format=docker \\\n    --location=$REGION\n\n!gcloud auth configure-docker {REGION}-docker.pkg.dev\n\n!docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}", "Deploy to Vertex AI\nUse the Python SDK to upload and deploy your model.\nUpload the custom container model", "from google.cloud import aiplatform\n\naiplatform.init(project=PROJECT_ID, location=REGION)\n\nmodel = aiplatform.Model.upload(\n    display_name=MODEL_DISPLAY_NAME,\n    artifact_uri=f\"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}\",\n    serving_container_image_uri=f\"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}\",\n)", "Deploy the model on Vertex AI\nAfter this step completes, the model is deployed and ready for online prediction.", "endpoint = model.deploy(machine_type=\"n1-standard-4\")", "Send predictions\nUsing Python SDK", "endpoint.predict(instances=[[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]])", "Using REST", "ENDPOINT_ID = endpoint.name\n\n! 
curl \\\n-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n-H \"Content-Type: application/json\" \\\n-d @instances.json \\\nhttps://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict", "Using gcloud CLI", "!gcloud beta ai endpoints predict $ENDPOINT_ID \\\n --region=$REGION \\\n --json-request=instances.json", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:", "# Undeploy model and delete endpoint\nendpoint.delete(force=True)\n\n# Delete the model resource\nmodel.delete()\n\n# Delete the container image from Artifact Registry\n!gcloud artifacts docker images delete \\\n --quiet \\\n --delete-tags \\\n {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dataDogma/Computer-Science
Courses/DAT-208x/DAT208X - Week 4 - Section 1 - Numpy.ipynb
gpl-3.0
[ "List Recap\n\n\n\nPowerful\n\n\nCOllection of values\n\n\nHold different types\n\n\nChange, add, remove\n\n\n\nThe Problem:\nBut there's one feature is missing, when analyzing data, the need for Data Science is to:\n\n\nPerform mathematical operations over collections of values.\n\n\nSpeed\n\n\nUnfortunatly list don't support both of these issues and here's why:\ne.g:", "# some random heights of the family\nheight = [1.75, 1.65, 1.71, 1.89, 1.79]\n\n# some random weights of the family\nweight = [65.4, 59.2, 63.6, 88.4, 68.7]\n\n# Now if we go to calculate BMI\nweight / height ** 2", "Solution : Numpy\n\n\n\nNumric Python or simply \"numpy\".\n\n\nAn alternative to python list: Numpy Array.\n\n\ncalculation is performed over entire arrays( element wise )\n\n\nEasy and Fast.\n\n\n\nImporting Numpy\nSyntax: import numpy", "import numpy as np # selective import\n\n# Convet the followoing list to numpy arrays\nheight = [1.75, 1.65, 1.71, 1.89, 1.79]\n\nweight = [65.4, 59.2, 63.6, 88.4, 68.7]\n\nnp_height = np.array( height )\nnp_weight = np.array( weight )\n\n# Let's confirm this as numpy arrray\ntype(np_height)\ntype(np_weight)\n\nbmi = np_weight / np_height ** 2\nbmi", "Note:\n\n\n\nNumpy assumes that your array contain elements of same type.\n\n\nIf the arary contains elements of differnet types, then resulitng numpy array will converted to type string.\n\n\nNumpy array should'nt be missclassified as an array, technically it a \"new data type\", just like int, string, float or boolean, and:\n\n\nComes packaged with it's own methods.\n\n\ni.e. It can behave differently than you'd expect.", "# A numpy arary with different types\nnp.array( [1, 2.5, \"are different\", True ] )", "Numpy : remarks", "# a simple python list\npy_list = [ 1, 2, 3 ]\n\n# a numpy array\nnumpy_array = np.array([1, 2, 3])\n\n\"\"\" \nremarks:\n\n+ If we add py_list with itself, it will generate a list of\n new length.\n \n+ Whereas, if we add the numpy_array, it would perform,\n \"element wise addition\"\n \nWarning: \n\nAgain be careful while using different python types in a numpy arary.\n \n\"\"\"\npy_list + py_list\n\nnumpy_array + numpy_array", "Numpy Subsetting\n\nAll the subsetting operation on a list, also get's performed on\nNumpy arrays, except for a few minor change, we look them now.", "bmi\n\n# get the fourth elemnt from the numpy array \"bmi\"\nprint(\"The bmi of the fourth element is: \" + str( bmi[3] ) )\n\n# slice and dice\nprint(\"\\nThe bmi's from 2nd to 3rd element is: \" + str( bmi[2 : 4] ) )\n\n\"\"\" \n\n Specifically for Numpy, there's another way to do list\n subsetting via \"booleans\", here's how.\n\n\"\"\"\n\nprint(\"\\nList of bmi have bmi larger than 23: \" + str( bmi > 23 ) )\n\n# Next, use this boolean arary to do subsetting\n\nprint(\"\\nThe element with the largest bmi is: \" + str(bmi[ bmi > 23 ]) )", "Exercise :\n\nRQ1: Which Numpy function do you use to create an array?\nAns: array()\n\nRQ2: Which two statements describe the advantage of Numpy Package over regular Python Lists?\nAns: \n\n\nThe Numpy Package provides the array, a data type that can be used to do element-wise calculations. 
\n\n\nBecause Numpy arrays can only hold element of a single type,\n\ncalculations on Numpy arrays can be carried out way faster than regular Python lists.\n\n\n\n\nRQ3: What is the resulting Numpy array z after executing the following lines of code?\nimport numpy as np\n x = np.array([1, 2, 3])\n y = np.array([3, 2, 1])\n z = x + y\nAns: array( [4, 4, 4] )\n\nRQ4: What happens when you put an integer, a Boolean, and a string in the same Numpy array using the array() function?\nAns: An array element is converted to string.\nLab : Numpy\n\nObjective:\n\n\nParctice with Numpy\n\n\nPerform Calculations with it.\n\n\nUnderstand subtle difference b/w Numpy arrays and Python list.\n\n\n\nList of lab exercises:\n\n\n\nYour first Numpy Arary -- 100xp, status : earned\n\n\nBaseball's player's height -- 100xp, status : earned\n\n\nLightweight baseball players -- 100xp, status : earned\n\n\nNumpy Side Effects -- 50xp, status : earned\n\n\nSubsetting Numpy Arrays -- 100xp, status : earned\n\n\n\n1. Your First Numpy array", "\"\"\"\nInstructions: \n\n + Import the \"numpy\" package as \"np\", so that you can refer to \"numpy\" with \"np\".\n \n + Use \"np.array()\" to create a Numpy array from \"baseball\". Name this array \"np_baseball\".\n \n + Print out the \"type of np_baseball\" to check that you got it right.\n\n\"\"\"\n# Create list baseball \nbaseball = [180, 215, 210, 210, 188, 176, 209, 200]\n\n# Import the numpy package as np\nimport numpy as np\n\n# Create a Numpy array from baseball: np_baseball\nnp_baseball = np.array(baseball)\nprint(np_baseball)\n\n# Print out type of np_baseball\nprint(type( np_baseball) )", "2. Baseball player's height\n\nPreface:\nYou are a huge baseball fan. You decide to call the MLB (Major League Baseball) and ask around for some more statistics on the height of the main players. They pass along data on more than a thousand players, which is stored as a regular Python list: height. The height is expressed in inches. Can you make a Numpy array out of it and convert the units to centimeters?", "\"\"\"\nInstructions:\n\n + Create a Numpy array from height. Name this new array np_height.\n\n + Print \"np_height\".\n \n + Multiply \"np_height\" with 0.0254 to convert all height measurements from inches to meters. \n \n - Store the new values in a new array, \"np_height_m\".\n \n + Print out np_height_m and check if the output makes sense.\n\n\"\"\"\n\n# height is available as a regular list\n# http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_MLB_HeightsWeights#References\n\n# Import numpy\nimport numpy as np\n\n# Create a Numpy array from height: np_height\nnp_height = np.array( height )\n\n# Print out np_height\nprint(\"The Height of the baseball players are: \" + str( np_height ) )\n\n# Convert np_height to m: np_height_m\nnp_height_m = np_height * 0.0254 # a inch is 0.0245 meters\n\n# Print np_height_m\nprint(\"\\nThe Height of the baseball players in meters are: \" + str( np_height_m ) )", "3. Baseball player's BMI:\n\nPreface: \nThe MLB also offers to let you analyze their weight data. Again, both are available as regular Python lists: height and weight. height is in inches and weight is in pounds.\nIt's now possible to calculate the BMI of each baseball player. Python code to convert height to a Numpy array with the correct units is already available in the workspace. 
Follow the instructions step by step and finish the game!", "\"\"\"\nInstructions:\n\n + Create a Numpy array from the weight list with the correct units.\n \n - Multiply by 0.453592 to go from pounds to kilograms. \n \n - Store the resulting Numpy array as np_weight_kg.\n \n + Use np_height_m and np_weight_kg to calculate the BMI of each player. \n \n - Use the following equation: \n \n BMI = weight( kg ) / height( m ) ** 2\n \n - Save the resulting numpy array as \"bmi\".\n \n + Print out \"bmi\".\n \n\"\"\"\n# height and weight are available as regular lists\n\n# Import numpy\nimport numpy as np\n\n# Create array from height with correct units: np_height_m\nnp_height_m = np.array(height) * 0.0254\n\n# Create array from weight with correct units: np_weight_kg \nnp_weight_kg = np.array( weight ) * 0.453592\n\n# Calculate the BMI: bmi\nbmi = np_weight_kg / np_height_m ** 2\n\n# Print out bmi\nprint(\"\\nThe BMIs of all the baseball players are: \" + str( bmi ) )\n", "4. Lightweight baseball players:\n\nTo subset both regular Python lists and Numpy arrays, you can use square brackets:\nx = [4 , 9 , 6, 3, 1]\n x[1]\n import numpy as np\n y = np.array(x)\n y[1]\nFor Numpy specifically, you can also use boolean Numpy arrays:\nhigh = y > 5\n y[high]", "\"\"\" \nInstructions:\n\n + Create a boolean Numpy array:\n \n - the element of the array should be \"True\" if the corresponding baseball player's BMI is below 21.\n \n - You can use the \"<\" operator for this.\n \n - Name the array \"light\" and print it.\n \n \n + Print out a Numpy array with the BMIs of all baseball players whose BMI is below 21. \n \n - Use \"light\" inside square brackets to do a selection on the bmi array.\n\"\"\"\n# height and weight are available as regular lists\n\n# Import numpy\nimport numpy as np\n\n# Calculate the BMI: bmi\nnp_height_m = np.array(height) * 0.0254\nnp_weight_kg = np.array(weight) * 0.453592\nbmi = np_weight_kg / (np_height_m ** 2)\n\n# Create the light array\nlight = np.array( bmi < 21 )\n\n# Print out light\nprint(\"\\nLightweight baseball players: \" + str( light ) )\n\n# Print out BMIs of all baseball players whose BMI is below 21\nprint(bmi[light])", "5. Numpy Side Effects:\n\nPreface:\n\n\nNumpy arrays cannot contain elements with different types.\n\n\nIf you try to build such an array anyway, some of the elements' types are changed to end up with a homogeneous array.\n\nThis is known as type coercion. \n\n\n\nSecond, the typical arithmetic operators,\nsuch as +, -, * and /, have a different meaning for regular Python lists and Numpy arrays.\n\n\n\nHave a look at this line:\n```In [1]: np.array([True, 1, 2]) + np.array([3, 4, False])\n Out[1]: array([4, 5, 2])```\n\nHere, the + operator is summing the Numpy arrays element-wise. The True element counts as the integer 1, so it gets added to 3 to give 4. The same happens with the other two pairs of elements.\nWhich code chunk builds the exact same Python data structure?\nAns: np.array([4, 3, 0]) + np.array([0, 2, 2]).\n\n6. Subsetting Numpy Arrays:\n\nLuckily, the two, i.e. "Python lists" and "Numpy arrays", behave similarly while subsetting, wohoooo!", "\"\"\"\nInstructions:\n\n + Subset np_weight: print out the element at index 50.\n \n + Print out a sub-array of np_height: It contains the elements at index 100 up to and including index 110\n\"\"\"\n\n# height and weight are available as regular lists\n\n# Import numpy\nimport numpy as np\n\n# Store weight and height lists as numpy arrays\nnp_weight = np.array(weight)\nnp_height = np.array(height)\n\n# Print out the weight at index 50\n# Ans: print(np_weight[50])\n\n# Print out sub-array of np_height: index 100 up to and including index 110\n# Ans: print(np_height[100 : 111])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
manoharan-lab/structural-color
montecarlo_tutorial.ipynb
gpl-3.0
[ "Tutorial for the montecarlo module of the structural-color python package\nCopyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson\nThis file is part of the structural-color python package.\nThis package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\nThis package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\nYou should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.\nLoading and using the package and module\nTo load, make sure you are in the top directory and do", "import numpy as np\nimport matplotlib.pyplot as plt\nimport structcol as sc\nimport structcol.refractive_index as ri\nfrom structcol import montecarlo as mc\nfrom structcol import detector as det\nfrom structcol import model\n\n# For Jupyter notebooks only:\n%matplotlib inline", "Run photon packets in parallel plane (film) medium\nThis is an example code to run a Monte Carlo calculation for photon packets travelling in a scattering medium.\nSet random number seed. This is so that the code produces the same trajectories each time (for testing purposes). Comment this out or set the seed to None for real calculations.", "seed = 1\n\n# Properties of system\nntrajectories = 100 # number of trajectories\nnevents = 100 # number of scattering events in each trajectory\nwavelen = sc.Quantity('600 nm') # wavelength for scattering calculations\nradius = sc.Quantity('0.125 um') # particle radius\nvolume_fraction = sc.Quantity(0.5, '') # volume fraction of particles\nn_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or\nn_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the \nn_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. 
\n # n_particle and n_matrix can have complex indices if absorption is desired\nn_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation\n n_matrix, \n volume_fraction) \nboundary = 'film' # geometry of sample, can be 'film' or 'sphere', see below for tutorial \n # on sphere case\nincidence_theta_min = sc.Quantity(0, 'rad') # min incidence angle of illumination (should be >=0 and < pi/2)\nincidence_theta_max = sc.Quantity(0, 'rad') # max incidence angle of illumination (should be >=0 and < pi/2)\n # (in this case, all trajectories hit the sample normally to the surface)\nincidence_phi_min = sc.Quantity(0, 'rad') # min incidence angle of illumination (should be >=0 and <= pi/2)\nincidence_phi_max = sc.Quantity(2*np.pi, 'rad') # max incidence angle of illumination (should be >=0 and <= pi/2)\n\n#%%timeit\n# Calculate the phase function and scattering and absorption coefficients from the single scattering model\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, mie_theory=False)\n\n# Initialize the trajectories\nr0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, \n incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max, \n incidence_theta_data = None, incidence_phi_data = None)\n # We can input specific incidence angles for each trajectory by setting \n # incidence_theta_data or incidence_phi_data to not None. This can be useful if we \n # have BRDF data on a specific material, and we want to model how light would reflect\n # off said material into a structurally colored film. The incidence angle data can be \n # Quantity arrays, but if they aren't, the values must be in radians. \nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\n# Generate a matrix of all the randomly sampled angles first \nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Create step size distribution\nstep = mc.sample_step(nevents, ntrajectories, mu_scat)\n \n# Create trajectories object\ntrajectories = mc.Trajectory(r0, k0, W0)\n\n# Run photons\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)", "Plot trajectories", "trajectories.plot_coord(ntrajectories, three_dim=True)", "Calculate the fraction of trajectories that are reflected and transmitted", "thickness = sc.Quantity('50 um') # thickness of the sample film\n\nreflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))\nprint('Absorption coefficient = ' + str(mu_abs))", "Add absorption to the system (in the particle and/or in the matrix)\nHaving absorption the particle or in the matrix implies that their refractive indices are complex (have a non-zero imaginary component). To include the effect of this absorption into the calculations, we just need to specify the complex refractive index in n_particle and/or n_matrix. 
Everything else remains the same as for the non-absorbing case.", "# Properties of system\nn_particle = sc.Quantity(1.54 + 0.001j, '') \nn_matrix = ri.n('vacuum', wavelen) + 0.0001j \nn_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) \n\n# Calculate the phase function and scattering and absorption coefficients from the single scattering model\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen)\n\n# Initialize the trajectories\nr0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, \n incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max)\nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\n# Generate a matrix of all the randomly sampled angles first \nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Create step size distribution\nstep = mc.sample_step(nevents, ntrajectories, mu_scat)\n \n# Create trajectories object\ntrajectories = mc.Trajectory(r0, k0, W0)\n\n# Run photons\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)\n\n# Calculate the fraction of reflected and transmitted trajectories\nthickness = sc.Quantity('50 um')\n\nreflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))", "As expected, the reflected fraction decreases if the system is absorbing. \nCalculate the reflectance for a system of core-shell particles\nWhen the system is made of core-shell particles, we must specify the refractive index, radius, and volume fraction of each layer, from innermost to outermost. 
\nThe reflectance is normalized, so it goes from 0 to 1.", "# Properties of system\nntrajectories = 100 # number of trajectories\nnevents = 100 # number of scattering events in each trajectory\nwavelen = sc.Quantity('600 nm')\nradius = sc.Quantity(np.array([0.125, 0.13]), 'um') # specify the radii from innermost to outermost layer\nn_particle = sc.Quantity(np.array([1.54,1.33]), '') # specify the index from innermost to outermost layer \nn_matrix = ri.n('vacuum', wavelen) \nn_medium = ri.n('vacuum', wavelen) \nvolume_fraction = sc.Quantity(0.5, '') # this is the volume fraction of the core-shell particle as a whole\nboundary = 'film' # geometry of sample\n\n# Calculate the volume fractions of each layer\nvf_array = np.empty(len(radius))\nr_array = np.array([0] + radius.magnitude.tolist()) \nfor r in np.arange(len(r_array)-1):\n vf_array[r] = (r_array[r+1]**3-r_array[r]**3) / (r_array[-1:]**3) * volume_fraction.magnitude\n\nn_sample = ri.n_eff(n_particle, n_matrix, vf_array) \n\n#%%timeit\n# Calculate the phase function and scattering and absorption coefficients from the single scattering model\n# (this absorption coefficient is of the scatterer, not of an absorber added to the system)\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen)\n\n# Initialize the trajectories\nr0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, \n incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max)\nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\n# Generate a matrix of all the randomly sampled angles first \nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Create step size distribution\nstep = mc.sample_step(nevents, ntrajectories, mu_scat)\n \n# Create trajectories object\ntrajectories = mc.Trajectory(r0, k0, W0)\n\n# Run photons\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)\n\n# Calculate the reflection and transmission fractions\nthickness = sc.Quantity('50 um')\n\nreflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))", "Calculate the reflectance for a polydisperse system\nWe can calculate the reflectance of a polydisperse system with either one or two species of particles, meaning that there are one or two mean radii, and each species has its own size distribution. We then need to specify the mean radius, the polydispersity index (pdi), and the concentration of each species. For example, consider a bispecies system of 90$\\%$ of 200 nm polystyrene particles and 10$\\%$ of 300 nm particles, with each species having a polydispersity index of 1$\\%$. In this case, the mean radii are [200, 300] nm, the pdi are [0.01, 0.01], and the concentrations are [0.9, 0.1]. \nIf the system is monospecies, we still need to specify the polydispersity parameters in 2-element arrays. For example, the mean radii become [200, 200] nm, the pdi become [0.01, 0.01], and the concentrations become [1.0, 0.0].\nTo run the code for polydisperse systems, we just need to specify the parameters accounting for polydispersity when calling 'mc.calc_scat()'. 
\nTo include absorption into the polydisperse system calculation, we just need to use the complex refractive index of the particle and/or the matrix. \nThe reflectance is normalized, so it goes from 0 to 1. \nNote: the code currently does not handle polydispersity for systems of core-shell particles.", "# Properties of system\nn_particle = sc.Quantity(1.54, '') \nn_matrix = ri.n('vacuum', wavelen) \nn_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) \n\n# define the parameters for polydispersity\nradius = sc.Quantity('125 nm')\nradius2 = sc.Quantity('150 nm')\nconcentration = sc.Quantity(np.array([0.9,0.1]), '')\npdi = sc.Quantity(np.array([0.01, 0.01]), '')\n\n# Calculate the phase function and scattering and absorption coefficients from the single scattering model\n# Need to specify extra parameters for the polydisperse (and bispecies) case\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, \n radius2=radius2, concentration=concentration, pdi=pdi, polydisperse=True)\n\n# Initialize the trajectories\nr0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, \n incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max)\nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\n# Generate a matrix of all the randomly sampled angles first \nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Create step size distribution\nstep = mc.sample_step(nevents, ntrajectories, mu_scat)\n \n# Create trajectories object\ntrajectories = mc.Trajectory(r0, k0, W0)\n\n# Run photons\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)\n\n# Calculate the reflection and transmission fractions\nthickness = sc.Quantity('50 um')\n\nreflectance, transmittance = det.calc_refl_trans(trajectories, thickness, n_medium, n_sample, boundary)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))", "Calculate the reflectance for a sample with surface roughness\nTwo classes of surface roughnesses are implemented in the model:\n1) When the surface roughness is high compared to the wavelength of light, we assume that light “sees” a nanoparticle before “seeing” the sample as an effective medium. The photons take a step based on the scattering length determined by the nanoparticle Mie resonances, without including the structure factor. After this first step, the photons are inside the sample and proceed to get scattered by the sample as an effective medium. We call this type of roughness \"fine\", and we input a fine_roughness parameter that is the fraction of the surface covered by \"fine\" roughness. For example, a fine_roughness of 0.3 means that 30% of incident light will hit fine surface roughness (e.g. will \"see\" a Mie scatterer first). The rest of the light will see a smooth surface, which could be flat or have coarse roughness. The fine_roughness parameter must be between 0 and 1. \n2) When the surface roughness is low relative to the wavelength, we can assume that light encounters a locally smooth surface with a slope relative to the z=0 plane. The model corrects the Fresnel reflection and refraction to account for the different angles of incidence due to the roughness. The coarse_roughness parameter is the rms slope of the surface and should be larger than 0. 
There is no upper bound, but when the coarse roughness tends to infinity, the surface becomes too \"spiky\" and light can no longer hit it, which reduces the reflectance down to 0. \nTo run the code with either type of surface roughness, the following functions are called differently:\n\n\ncalc_scat(): to include fine roughness, need to input fine_roughness > 0. In this case, it returns a 2-element mu_scat, with the first element being the scattering coefficient of the sample as a whole, and the second being the scattering coefficient from Mie theory. If fine_roughness=0, the function returns only the first scattering coefficient in a calculation without roughness.\n\n\ninitialize(): to include coarse roughness, need to input coarse_roughness > 0, in which case the function returns kz0_rot and kz0_refl that are needed for calc_refl_trans(). \n\n\nsample_step(): to include fine roughness, need to input fine_roughness > 0.\n\n\ncalc_refl_trans(): to include coarse roughness, need to input kz0_rot and kz0_refl from initialize(). To include fine roughness, need to input fine_roughness and n_matrix.\n\n\n$\\textbf{Note 1:}$ to reiterate, fine_roughness + coarse_roughness can add up to more than 1. Coarse roughness is how much coarse roughness there is on the surface, and it can be larger than 1. The larger the value, the larger the slopes on the surface. Fine roughness is what fraction of the surface is covered by fine surface roughness so it must be between 0 and 1. Both types of roughnesses can be included together or separately into the calculation. \n$\\textbf{Note 2:}$ Surface roughness has not yet been implemented to work with spherical boundary conditions.", "# Properties of system\nntrajectories = 100 # number of trajectories\nnevents = 100 # number of scattering events in each trajectory\nwavelen = sc.Quantity('600 nm') \nradius = sc.Quantity('0.125 um')\nvolume_fraction = sc.Quantity(0.5, '')\nn_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or\nn_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the \nn_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. 
\n # n_particle and n_matrix can have complex indices if absorption is desired\nboundary = 'film' # geometry of sample, can be 'film' or 'sphere'\nn_sample = ri.n_eff(n_particle, n_matrix, volume_fraction) \n\n# Need to specify fine_roughness and coarse_roughness \nfine_roughness = sc.Quantity(0.6, '')\ncoarse_roughness = sc.Quantity(1.1, '')\n\n# Need to specify fine roughness parameter in this function\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen, \n fine_roughness=fine_roughness, n_matrix=n_matrix)\n\n# The output of mc.initialize() depends on whether there is coarse roughness or not\nif coarse_roughness > 0.:\n r0, k0, W0, kz0_rotated, kz0_reflected = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary,\n seed=seed, incidence_theta_min = incidence_theta_min, \n incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, \n incidence_phi_max = incidence_phi_max,\n coarse_roughness=coarse_roughness)\nelse: \n r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, seed=seed, \n incidence_theta_min = incidence_theta_min, incidence_theta_max = incidence_theta_max, \n incidence_phi_min = incidence_phi_min, incidence_phi_max = incidence_phi_max,\n coarse_roughness=coarse_roughness)\n kz0_rotated = None\n kz0_reflected = None\n \nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Need to specify the fine roughness parameter in this function\nstep = mc.sample_step(nevents, ntrajectories, mu_scat, fine_roughness=fine_roughness)\n \ntrajectories = mc.Trajectory(r0, k0, W0)\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)\n\nz_low = sc.Quantity('0.0 um')\ncutoff = sc.Quantity('50 um')\n\n# If there is coarse roughness, need to specify kz0_rotated and kz0_reflected. \nreflectance, transmittance = det.calc_refl_trans(trajectories, cutoff, n_medium, n_sample, boundary,\n kz0_rot=kz0_rotated, \n kz0_refl=kz0_reflected)\nprint('R = '+ str(reflectance))\nprint('T = '+ str(transmittance))", "Run photon packets in a medium with a spherical boundary\nExample code to run a Monte Carlo calculation for photon packets travelling in a sample with a spherical boundary\nThere are only a few subtle differences between running the basic Monte Carlo calculation for a sphere and a film:\n1. Set boundary='sphere' instead of 'film'\n2. After initialization, multiply r0 by assembly_diameter/2. This corresponds to a spot size that is equal to the size of the sphere. \n3. Assembly_diameter is passed for sphere where thickness is passed for film\nThe sphere also has a few extra options for more complex Monte Carlo simulations, and more plotting options that \nallow you to visually check the results.\n\n\ninitialize():\n When the argument boundary='sphere', you can set plot_initial=True to see the initial positions on of the trajectories on the sphere. The blue arrows show the original directions of the incident light, and the green arrows show the directions after correction for refraction. For sphere boundary, incidence angle currently must be 0.\n\n\ncalc_refl_trans():\n when argument plot_exits=True, the function plots the reflected and transmitted trajectory exits from the sphere. Blue dots mark the last trajectory position inside the sphere, before exiting. 
The red dots mark the intersection of the trajectory with the sphere surface. The green dots mark the trajectory position outside the sphere, just after exiting.\n\n\nCalculate reflectance for a sphere sample", "# Properties of system\nntrajectories = 100 # number of trajectories\nnevents = 100 # number of scattering events in each trajectory\nwavelen = sc.Quantity('600 nm') # wavelength for scattering calculations\nradius = sc.Quantity('0.125 um') # particle radius\nassembly_diameter = sc.Quantity('10 um')# diameter of sphere assembly \nvolume_fraction = sc.Quantity(0.5, '') # volume fraction of particles\nn_particle = sc.Quantity(1.54, '') # refractive indices can be specified as pint quantities or\nn_matrix = ri.n('vacuum', wavelen) # called from the refractive_index module. n_matrix is the \nn_medium = ri.n('vacuum', wavelen) # space within sample. n_medium is outside the sample. \n # n_particle and n_matrix can have complex indices if absorption is desired\nn_sample = ri.n_eff(n_particle, # refractive index of sample, calculated using Bruggeman approximation\n n_matrix, \n volume_fraction)\nboundary = 'sphere' # geometry of sample, can be 'film' or 'sphere'\n\n# Calculate the phase function and scattering and absorption coefficients from the single scattering model\n# (this absorption coefficient is of the scatterer, not of an absorber added to the system)\np, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wavelen)\n\n# Initialize the trajectories for a sphere\n# set plot_initial to True to see the initial positions of trajectories. The default value of plot_initial is False\nr0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary, \n plot_initial = True, \n sample_diameter = assembly_diameter, \n spot_size = assembly_diameter)\n\n# make positions, directions, and weights into quantities with units\nr0 = sc.Quantity(r0, 'um')\nk0 = sc.Quantity(k0, '')\nW0 = sc.Quantity(W0, '')\n\n# Generate a matrix of all the randomly sampled angles first \nsintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n\n# Create step size distribution\nstep = mc.sample_step(nevents, ntrajectories, mu_scat)\n#print(step)\n# Create trajectories object\ntrajectories = mc.Trajectory(r0, k0, W0)\n\n# Run photons\ntrajectories.absorb(mu_abs, step) \ntrajectories.scatter(sintheta, costheta, sinphi, cosphi) \ntrajectories.move(step)\n\n# Calculate reflectance and transmittance\n# Set plot_exits to true to plot positions of trajectories just before (red) and after (green) exit.\n# The default value of plot_exits is False.\n# The default value of run_tir is True, so you must set it to False to exclude the fresnel reflected trajectories. \nreflectance, transmittance = det.calc_refl_trans(trajectories, assembly_diameter, n_medium, n_sample, boundary, \n plot_exits = True)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))", "For spherical boundaries, there tends to be more light reflected back into the film upon an attempted exit, due to Fresnel reflection (this includes both total internal reflection and partial reflections). We've addressed this problem by including the option to re-run these Fresnel reflected trajectories as new Monte Carlo trajectories. 
\nTo re-run these trajectory components as new Monte Carlo trajectories, there are a few extra arguments that you must include in calc_refl_trans():\n\n\nrun_fresnel_traj = True\n<br> This boolean tells calc_refl_trans() that we want to re-run the Fresnel reflected trajectories\n\n\nmu_abs = mu_abs, mu_scat=mu_scat, p=p\n<br> These values are needed because when run_fresnel_traj=True, a new Monte Carlo simulation is calculated, which requires scattering calculations\n\n\nCalculate reflectance for a sphere sample, re-running the Fresnel reflected components of trajectories", "# Calculate reflectance and transmittance\n# The default value of plot_exits is False, so you need not set it to avoid plotting trajectories.\n# The default value of run_tir is True, so you need not set it to include Fresnel reflected trajectories.\nreflectance, transmittance = det.calc_refl_trans(trajectories, assembly_diameter, n_medium, n_sample, boundary, \n run_fresnel_traj = True, \n mu_abs=mu_abs, mu_scat=mu_scat, p=p)\n\nprint('Reflectance = '+ str(reflectance))\nprint('Transmittance = '+ str(transmittance))" ]
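"As an optional extension (not part of the original tutorial), the same calls shown above can be looped over several wavelengths to sketch a reflectance spectrum for the sphere sample. This is a hypothetical sketch: it assumes the imports (sc, ri, mc, det) and the objects ntrajectories, nevents, radius, assembly_diameter, volume_fraction, n_particle, n_matrix, n_medium, n_sample and boundary already defined earlier in this notebook.", "# Hypothetical sketch: reflectance spectrum for the sphere sample, reusing objects defined above\nreflectance_spectrum = []\nfor wavelen_nm in range(450, 801, 50):\n    wl = sc.Quantity(wavelen_nm, 'nm')\n    # phase function and coefficients change with wavelength\n    p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle, n_sample, volume_fraction, wl)\n    r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium, n_sample, boundary,\n                               sample_diameter=assembly_diameter, spot_size=assembly_diameter)\n    r0 = sc.Quantity(r0, 'um')\n    k0 = sc.Quantity(k0, '')\n    W0 = sc.Quantity(W0, '')\n    sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)\n    step = mc.sample_step(nevents, ntrajectories, mu_scat)\n    trajectories = mc.Trajectory(r0, k0, W0)\n    trajectories.absorb(mu_abs, step)\n    trajectories.scatter(sintheta, costheta, sinphi, cosphi)\n    trajectories.move(step)\n    refl, trans = det.calc_refl_trans(trajectories, assembly_diameter, n_medium, n_sample, boundary)\n    reflectance_spectrum.append(refl)\nprint(reflectance_spectrum)"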
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
iemejia/incubator-beam
examples/notebooks/get-started/try-apache-beam-java.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/get-started/try-apache-beam-java.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nTry Apache Beam - Java\nIn this notebook, we set up a Java development environment and work through a simple example using the DirectRunner. You can explore other runners with the Beam Compatibility Matrix.\nTo navigate through different sections, use the table of contents. From the View drop-down list, select Table of contents.\nTo run a code cell, you can click the Run cell button at the top left of the cell, or select it and press Shift+Enter. Try modifying a code cell and re-running it to see what happens.\nTo learn more about Colab, see Welcome to Colaboratory!.\nSetup\nFirst, you need to set up your environment.", "# Run and print a shell command.\ndef run(cmd):\n print('>> {}'.format(cmd))\n !{cmd} # This is magic to run 'cmd' in the shell.\n print('')\n\n# Copy the input file into the local filesystem.\nrun('mkdir -p data')\nrun('gsutil cp gs://dataflow-samples/shakespeare/kinglear.txt data/')", "Installing development tools\nLet's start by installing Java. We'll use the default-jdk, which uses OpenJDK. This will take a while, so feel free to go for a walk or do some stretching.\nNote: Alternatively, you could install the proprietary Oracle JDK instead.", "# Update and upgrade the system before installing anything else.\nrun('apt-get update > /dev/null')\nrun('apt-get upgrade > /dev/null')\n\n# Install the Java JDK.\nrun('apt-get install default-jdk > /dev/null')\n\n# Check the Java version to see if everything is working well.\nrun('javac -version')", "Now, let's install Gradle, which we'll need to automate the build and running processes for our application. 
\nNote: Alternatively, you could install and configure Maven instead.", "import os\n\n# Download the gradle source.\ngradle_version = 'gradle-5.0'\ngradle_path = f\"/opt/{gradle_version}\"\nif not os.path.exists(gradle_path):\n run(f\"wget -q -nc -O gradle.zip https://services.gradle.org/distributions/{gradle_version}-bin.zip\")\n run('unzip -q -d /opt gradle.zip')\n run('rm -f gradle.zip')\n\n# We're choosing to use the absolute path instead of adding it to the $PATH environment variable.\ndef gradle(args):\n run(f\"{gradle_path}/bin/gradle --console=plain {args}\")\n\ngradle('-v')", "build.gradle\nWe'll also need a build.gradle file which will allow us to invoke some useful commands.", "%%writefile build.gradle\n\nplugins {\n // id 'idea' // Uncomment for IntelliJ IDE\n // id 'eclipse' // Uncomment for Eclipse IDE\n\n // Apply java plugin and make it a runnable application.\n id 'java'\n id 'application'\n\n // 'shadow' allows us to embed all the dependencies into a fat jar.\n id 'com.github.johnrengelman.shadow' version '4.0.3'\n}\n\n// This is the path of the main class, stored within ./src/main/java/\nmainClassName = 'samples.quickstart.WordCount'\n\n// Declare the sources from which to fetch dependencies.\nrepositories {\n mavenCentral()\n}\n\n// Java version compatibility.\nsourceCompatibility = 1.8\ntargetCompatibility = 1.8\n\n// Use the latest Apache Beam major version 2.\n// You can also lock into a minor version like '2.9.+'.\next.apacheBeamVersion = '2.+'\n\n// Declare the dependencies of the project.\ndependencies {\n shadow \"org.apache.beam:beam-sdks-java-core:$apacheBeamVersion\"\n\n runtime \"org.apache.beam:beam-runners-direct-java:$apacheBeamVersion\"\n runtime \"org.slf4j:slf4j-api:1.+\"\n runtime \"org.slf4j:slf4j-jdk14:1.+\"\n\n testCompile \"junit:junit:4.+\"\n}\n\n// Configure 'shadowJar' instead of 'jar' to set up the fat jar.\nshadowJar {\n baseName = 'WordCount' // Name of the fat jar file.\n classifier = null // Set to null, otherwise 'shadow' appends a '-all' to the jar file name.\n manifest {\n attributes('Main-Class': mainClassName) // Specify where the main class resides.\n }\n}", "Creating the directory structure\nJava and Gradle expect a specific directory structure. This helps organize large projects into a standard structure.\nFor now, we only need a place where our quickstart code will reside. That has to go within ./src/main/java/.", "run('mkdir -p src/main/java/samples/quickstart')", "Minimal word count\nThe following example is the \"Hello, World!\" of data processing, a basic implementation of word count. We're creating a simple data processing pipeline that reads a text file and counts the number of occurrences of every word.\nThere are many scenarios where all the data does not fit in memory. 
Notice that the outputs of the pipeline go to the file system, which allows for large processing jobs in distributed environments.\nWordCount.java", "%%writefile src/main/java/samples/quickstart/WordCount.java\n\npackage samples.quickstart;\n\nimport org.apache.beam.sdk.Pipeline;\nimport org.apache.beam.sdk.io.TextIO;\nimport org.apache.beam.sdk.options.PipelineOptions;\nimport org.apache.beam.sdk.options.PipelineOptionsFactory;\nimport org.apache.beam.sdk.transforms.Count;\nimport org.apache.beam.sdk.transforms.Filter;\nimport org.apache.beam.sdk.transforms.FlatMapElements;\nimport org.apache.beam.sdk.transforms.MapElements;\nimport org.apache.beam.sdk.values.KV;\nimport org.apache.beam.sdk.values.TypeDescriptors;\n\nimport java.util.Arrays;\n\npublic class WordCount {\n public static void main(String[] args) {\n String inputsDir = \"data/*\";\n String outputsPrefix = \"outputs/part\";\n\n PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();\n Pipeline pipeline = Pipeline.create(options);\n pipeline\n .apply(\"Read lines\", TextIO.read().from(inputsDir))\n .apply(\"Find words\", FlatMapElements.into(TypeDescriptors.strings())\n .via((String line) -> Arrays.asList(line.split(\"[^\\\\p{L}]+\"))))\n .apply(\"Filter empty words\", Filter.by((String word) -> !word.isEmpty()))\n .apply(\"Count words\", Count.perElement())\n .apply(\"Write results\", MapElements.into(TypeDescriptors.strings())\n .via((KV<String, Long> wordCount) ->\n wordCount.getKey() + \": \" + wordCount.getValue()))\n .apply(TextIO.write().to(outputsPrefix));\n pipeline.run();\n }\n}", "Build and run\nLet's first check what the final file system structure looks like. These are all the files required to build and run our application.\n\nbuild.gradle - build configuration for Gradle\nsrc/main/java/samples/quickstart/WordCount.java - application source code\ndata/kinglear.txt - input data, this could be any file or files\n\nWe are now ready to build the application using gradle build.", "# Build the project.\ngradle('build')\n\n# Check the generated build files.\nrun('ls -lh build/libs/')", "There are two files generated:\n* The content.jar file, the application generated from the regular build command. It's only a few kilobytes in size.\n* The WordCount.jar file, with the baseName we specified in the shadowJar section of the build.gradle file. It's several megabytes in size, with all the required libraries it needs to run embedded in it.\nThe file we're actually interested in is the fat JAR file WordCount.jar. 
To run the fat JAR, we'll use the gradle runShadow command.", "# Run the shadow (fat jar) build.\ngradle('runShadow')\n\n# Sample the first 20 results, remember there are no ordering guarantees.\nrun('head -n 20 outputs/part-00000-of-*')", "Distributing your application\nWe can run our fat JAR file as long as we have a Java Runtime Environment installed.\nTo distribute, we copy the fat JAR file and run it with java -jar.", "# You can now distribute and run your Java application as a standalone jar file.\nrun('cp build/libs/WordCount.jar .')\nrun('java -jar WordCount.jar')\n\n# Sample the first 20 results, remember there are no ordering guarantees.\nrun('head -n 20 outputs/part-00000-of-*')", "Word count with comments\nBelow is mostly the same code as above, but with comments explaining every line in more detail.", "%%writefile src/main/java/samples/quickstart/WordCount.java\n\npackage samples.quickstart;\n\nimport org.apache.beam.sdk.Pipeline;\nimport org.apache.beam.sdk.io.TextIO;\nimport org.apache.beam.sdk.options.PipelineOptions;\nimport org.apache.beam.sdk.options.PipelineOptionsFactory;\nimport org.apache.beam.sdk.transforms.Count;\nimport org.apache.beam.sdk.transforms.Filter;\nimport org.apache.beam.sdk.transforms.FlatMapElements;\nimport org.apache.beam.sdk.transforms.MapElements;\nimport org.apache.beam.sdk.values.KV;\nimport org.apache.beam.sdk.values.PCollection;\nimport org.apache.beam.sdk.values.TypeDescriptors;\n\nimport java.util.Arrays;\n\npublic class WordCount {\n public static void main(String[] args) {\n String inputsDir = \"data/*\";\n String outputsPrefix = \"outputs/part\";\n\n PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();\n Pipeline pipeline = Pipeline.create(options);\n\n // Store the word counts in a PCollection.\n // Each element is a KeyValue of (word, count) of types KV<String, Long>.\n PCollection<KV<String, Long>> wordCounts =\n // The input PCollection is an empty pipeline.\n pipeline\n\n // Read lines from a text file.\n .apply(\"Read lines\", TextIO.read().from(inputsDir))\n // Element type: String - text line\n\n // Use a regular expression to iterate over all words in the line.\n // FlatMapElements will yield an element for every element in an iterable.\n .apply(\"Find words\", FlatMapElements.into(TypeDescriptors.strings())\n .via((String line) -> Arrays.asList(line.split(\"[^\\\\p{L}]+\"))))\n // Element type: String - word\n\n // Keep only non-empty words.\n .apply(\"Filter empty words\", Filter.by((String word) -> !word.isEmpty()))\n // Element type: String - word\n\n // Count each unique word.\n .apply(\"Count words\", Count.perElement());\n // Element type: KV<String, Long> - key: word, value: counts\n\n // We can process a PCollection through other pipelines, too.\n // The input PCollection are the wordCounts from the previous step.\n wordCounts\n // Format the results into a string so we can write them to a file.\n .apply(\"Write results\", MapElements.into(TypeDescriptors.strings())\n .via((KV<String, Long> wordCount) ->\n wordCount.getKey() + \": \" + wordCount.getValue()))\n // Element type: str - text line\n\n // Finally, write the results to a file.\n .apply(TextIO.write().to(outputsPrefix));\n\n // We have to explicitly run the pipeline, otherwise it's only a definition.\n pipeline.run();\n }\n}\n\n# Build and run the project. 
The 'runShadow' task implicitly does a 'build'.\ngradle('runShadow')\n\n# Sample the first 20 results, remember there are no ordering guarantees.\nrun('head -n 20 outputs/part-00000-of-*')" ]
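"The head command above prints results in no particular order. As an optional check (not part of the Beam quickstart itself), a few lines of plain Python can read the output shards back and list the most frequent words.", "# Optional sanity check: read the output shards and show the 10 most frequent words.\nimport glob\nimport collections\n\ncounts = collections.Counter()\nfor shard in glob.glob('outputs/part-*'):\n    with open(shard) as f:\n        for line in f:\n            word, _, count = line.rpartition(': ')\n            if word:\n                counts[word] = int(count)\n\nfor word, count in counts.most_common(10):\n    print('{}: {}'.format(word, count))"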
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/mlops-with-vertex-ai
06-model-deployment.ipynb
apache-2.0
[ "06 - Model Deployment\nThe purpose of this notebook is to execute a CI/CD routine to test and deploy the trained model to Vertex AI as an Endpoint for online prediction serving. The notebook covers the following steps:\n1. Run the test steps locally.\n2. Execute the model deployment CI/CD steps using Cloud Build.\nSetup\nImport libraries", "import os\nimport logging\n\nlogging.getLogger().setLevel(logging.INFO)", "Setup Google Cloud project", "PROJECT = '[your-project-id]' # Change to your project id.\nREGION = 'us-central1' # Change to your region.\n\nif PROJECT == \"\" or PROJECT is None or PROJECT == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT = shell_output[0]\n\nprint(\"Project ID:\", PROJECT)\nprint(\"Region:\", REGION)", "Set configurations", "VERSION = 'v01'\nDATASET_DISPLAY_NAME = 'chicago-taxi-tips'\nMODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}'\nENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier'\n\nCICD_IMAGE_NAME = 'cicd:latest'\nCICD_IMAGE_URI = f\"gcr.io/{PROJECT}/{CICD_IMAGE_NAME}\"", "1. Run CI/CD steps locally", "os.environ['PROJECT'] = PROJECT\nos.environ['REGION'] = REGION\nos.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME\nos.environ['ENDPOINT_DISPLAY_NAME'] = ENDPOINT_DISPLAY_NAME", "Run the model artifact testing", "!py.test src/tests/model_deployment_tests.py::test_model_artifact -s", "Run create endpoint", "!python build/utils.py \\\n --mode=create-endpoint\\\n --project={PROJECT}\\\n --region={REGION}\\\n --endpoint-display-name={ENDPOINT_DISPLAY_NAME}", "Run deploy model", "!python build/utils.py \\\n --mode=deploy-model\\\n --project={PROJECT}\\\n --region={REGION}\\\n --endpoint-display-name={ENDPOINT_DISPLAY_NAME}\\\n --model-display-name={MODEL_DISPLAY_NAME}", "Test deployed model endpoint", "!py.test src/tests/model_deployment_tests.py::test_model_endpoint", "2. Execute the Model Deployment CI/CD routine in Cloud Build\nThe CI/CD routine is defined in the model-deployment.yaml file, and consists of the following steps:\n1. Load and test the the trained model interface.\n2. Create and endpoint in Vertex AI if it doesn't exists.\n3. Deploy the model to the endpoint.\n4. Test the endpoint.\nBuild CI/CD container Image for Cloud Build\nThis is the runtime environment where the steps of testing and deploying model will be executed.", "!echo $CICD_IMAGE_URI\n\n!gcloud builds submit --tag $CICD_IMAGE_URI build/. --timeout=15m", "Run CI/CD from model deployment using Cloud Build", "REPO_URL = \"https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai.git\" # Change to your github repo.\nBRANCH = \"main\" \n\nSUBSTITUTIONS=f\"\"\"\\\n_REPO_URL='{REPO_URL}',\\\n_BRANCH={BRANCH},\\\n_CICD_IMAGE_URI={CICD_IMAGE_URI},\\\n_PROJECT={PROJECT},\\\n_REGION={REGION},\\\n_MODEL_DISPLAY_NAME={MODEL_DISPLAY_NAME},\\\n_ENDPOINT_DISPLAY_NAME={ENDPOINT_DISPLAY_NAME},\\\n\"\"\"\n\n!echo $SUBSTITUTIONS\n\n!gcloud builds submit --no-source --config build/model-deployment.yaml --substitutions {SUBSTITUTIONS} --timeout=30m" ]
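"As an optional sanity check (not part of the CI/CD routine above), the deployed endpoint can also be called directly from the notebook with the Vertex AI SDK. The test instance below is only a placeholder: the feature names and values must match the serving signature of your trained model.", "from google.cloud import aiplatform\n\naiplatform.init(project=PROJECT, location=REGION)\n\n# Look up the endpoint created by the deployment steps above (most recently updated one).\nendpoint = aiplatform.Endpoint.list(filter=f'display_name={ENDPOINT_DISPLAY_NAME}', order_by='update_time')[-1]\n\n# NOTE: hypothetical instance - replace with the features your model actually expects.\ntest_instance = {'trip_month': [1], 'trip_day': [15], 'trip_hour': [8], 'trip_seconds': [600], 'trip_miles': [1.2]}\nprediction = endpoint.predict(instances=[test_instance])\nprint(prediction)"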
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
takanory/pymook-samplecode
4_scraping/4_2_scraping.ipynb
mit
[ "4.2 サードパーティ製パッケージを使ってスクレイピングに挑戦\n\nRequests http://docs.python-requests.org/\nBeautiful Soup http://www.crummy.com/software/BeautifulSoup/", "import requests\nimport bs4", "RequestsでWebページを取得", "# Requestsでgihyo.jpのページのデータを取得\nimport requests\nr = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')\nr.status_code # ステータスコードを取得\n\nr.text[:50] # 先頭50文字を取得", "Requestsを使いこなす\n\nconnpass APIリファレンス https://connpass.com/about/api/", "# JSON形式のAPIレスポンスを取得\nr = requests.get('https://connpass.com/api/v1/event/?keyword=python')\ndata = r.json() # JSONをデコードしたデータを取得\nfor event in data['events']:\n print(event['title'])\n\n# 各種HTTPメソッドに対応\npayload = {'key1': 'value1', 'key2': 'value2'}\nr = requests.post('http://httpbin.org/post', data=payload)\nr = requests.put('http://httpbin.org/put', data=payload)\nr = requests.delete('http://httpbin.org/delete')\nr = requests.head('http://httpbin.org/get')\nr = requests.options('http://httpbin.org/get')\n\n# Requestsの便利な使い方\nr = requests.get('http://httpbin.org/get', params=payload)\nr.url\n\nr = requests.get('https://httpbin.org/basic-auth/user/passwd', auth=('user', 'passwd'))\nr.status_code", "httpbin(1): HTTP Client Testing Service https://httpbin.org/\n\nBeautiful Soup 4でWebページを解析", "# Beautiful Soup 4で「技評ねこ部通信」を取得\nimport requests\nfrom bs4 import BeautifulSoup\nr = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')\nsoup = BeautifulSoup(r.content, 'html.parser')\ntitle = soup.title # titleタグの情報を取得\ntype(title) # オブジェクトの型は Tag 型\n\nprint(title) # タイトルの中身を確認\nprint(title.text) # タイトルの中のテキストを取得\n\n# 技評ねこ部通信の1件分のデータを取得\ndiv = soup.find('div', class_='readingContent01')\nli = div.find('li') # divタグの中の最初のliタグを取得\nprint(li.a['href']) # liタグの中のaタグのhref属性の値を取得\nprint(li.a.text) # aタグの中の文字列を取得\nli.a.text.split(maxsplit=1) # 文字列のsplit()で日付とタイトルに分割\n\n# 技評ねこ部通信の全データを取得\ndiv = soup.find('div', class_='readingContent01')\nfor li in div.find_all('li'): # divタグの中の全liタグを取得\n url = li.a['href']\n date, text = li.a.text.split(maxsplit=1)\n print('{},{},{}'.format(date, text, url))", "Beautiful Soup 4を使いこなす", "# タグの情報を取得する\ndiv = soup.find('div', class_='readingContent01')\ntype(div) # データの型はTag型\n\ndiv.name\n\ndiv['class']\n\ndiv.attrs # 全属性を取得\n\n# さまざまな検索方法\na_tags = soup.find_all('a') # タグ名を指定\nlen(a_tags)\n\nimport re\nfor tag in soup.find_all(re.compile('^b')): # 正規表現で指定\n print(tag.name)\n\nfor tag in soup.find_all(['html', 'title']): # リストで指定\n print(tag.name)\n\n# キーワード引数での属性指定\ntag = soup.find(id='categoryNavigation') # id属性を指定して検索\ntag.name, tag.attrs\n\ntags = soup.find_all(id=True) # id属性があるタグを全て検索\nlen(tags)\n\ndiv = soup.find('div', class_='readingContent01') # class属性はclass_と指定する\ndiv.attrs\n\ndiv = soup.find('div', {'class': 'readingContent01'}) # 辞書形式でも指定できる\ndiv.attrs\n\n# CSSセレクターを使用した検索\nsoup.select('title') # タグ名を指定\n\ntags = soup.select('body a') # body タグの下のaタグ\nlen(a_tags)\n\na_tags = soup.select('p > a') # pタグの直下のaタグ\nlen(a_tags)\n\nsoup.select('body > a') # bodyタグの直下のaタグは存在しない\n\ndiv = soup.select('.readingContent01') # classを指定\ndiv = soup.select('div.readingContent01')\ndiv = soup.select('#categoryNavigation') # idを指定\ndiv = soup.select('div#categoryNavigation')\na_tag = soup.select_one('div > a') # 最初のdivタグ直下のaタグを返す" ]
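"As a small follow-up sketch (not part of the original text), the same Requests and Beautiful Soup calls can be combined to save the scraped list of articles to a CSV file using the standard csv module.", "import csv\nimport requests\nfrom bs4 import BeautifulSoup\n\nr = requests.get('http://gihyo.jp/lifestyle/clip/01/everyday-cat')\nsoup = BeautifulSoup(r.content, 'html.parser')\ndiv = soup.find('div', class_='readingContent01')\n\n# write date, title and URL of every item to a CSV file\nwith open('neko_news.csv', 'w', encoding='utf-8', newline='') as f:\n    writer = csv.writer(f)\n    writer.writerow(['date', 'title', 'url'])\n    for li in div.find_all('li'):\n        date, text = li.a.text.split(maxsplit=1)\n        writer.writerow([date, text, li.a['href']])"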
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Autodesk/molecular-design-toolkit
moldesign/_notebooks/Example 4. HIV Protease bound to an inhibitor.ipynb
apache-2.0
[ "<span style=\"float:right\"><a href=\"http://moldesign.bionano.autodesk.com/\" target=\"_blank\" title=\"About\">About</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"https://github.com/autodesk/molecular-design-toolkit/issues\" target=\"_blank\" title=\"Issues\">Issues</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"http://bionano.autodesk.com/MolecularDesignToolkit/explore.html\" target=\"_blank\" title=\"Tutorials\">Tutorials</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<a href=\"http://autodesk.github.io/molecular-design-toolkit/\" target=\"_blank\" title=\"Documentation\">Documentation</a></span>\n</span>\n\n<br>\n<center><h1>Example 4: The Dynamics of HIV Protease bound to a small molecule </h1> </center>\nThis notebook prepares a co-crystallized protein / small molecule ligand structure from the PDB database and prepares it for molecular dynamics simulation. \n\nAuthor: Aaron Virshup, Autodesk Research<br>\nCreated on: August 9, 2016\nTags: HIV Protease, small molecule, ligand, drug, PDB, MD", "import moldesign as mdt\nimport moldesign.units as u", "Contents\n\n\nI. The crystal structure\nA. Download and visualize\nB. Try assigning a forcefield\n\n\nII. Parameterizing a small molecule\nA. Isolate the ligand\nB. Assign bond orders and hydrogens\nC. Generate forcefield parameters\n\n\nIII. Prepping the protein\nA. Strip waters\nB. Histidine\n\n\nIV. Prep for dynamics\nA. Assign the forcefield\nB. Attach and configure simulation methods\nD. Equilibrate the protein\n\n\n\nI. The crystal structure\nFirst, we'll download and investigate the 3AID crystal structure.\nA. Download and visualize", "protease = mdt.from_pdb('3AID')\nprotease\n\nprotease.draw()", "B. Try assigning a forcefield\nThis structure is not ready for MD - this command will raise a ParameterizationError Exception. After running this calculation, click on the Errors/Warnings tab to see why.", "amber_ff = mdt.forcefields.DefaultAmber()\nnewmol = amber_ff.create_prepped_molecule(protease)", "You should see 3 errors: \n 1. The residue name ARQ not recognized\n 1. Atom HD1 in residue HIS69, chain A was not recognized\n 1. Atom HD1 in residue HIS69, chain B was not recognized\n(There's also a warning about bond distances, but these can be generally be fixed with an energy minimization before running dynamics)\nWe'll start by tackling the small molecule \"ARQ\".\nII. Parameterizing a small molecule\nWe'll use the GAFF (generalized Amber force field) to create force field parameters for the small ligand.\nA. Isolate the ligand\nClick on the ligand to select it, then we'll use that selection to create a new molecule.", "sel = mdt.widgets.ResidueSelector(protease)\nsel\n\ndrugres = mdt.Molecule(sel.selected_residues[0])\ndrugres.draw2d(width=700, show_hydrogens=True)", "B. Assign bond orders and hydrogens\nA PDB file provides only limited information; they often don't provide indicate bond orders, hydrogen locations, or formal charges. These can be added, however, with the add_missing_pdb_data tool:", "drugmol = mdt.tools.set_hybridization_and_saturate(drugres)\ndrugmol.draw(width=500)\n\ndrugmol.draw2d(width=700, show_hydrogens=True)", "C. Generate forcefield parameters\nWe'll next generate forcefield parameters using this ready-to-simulate structure.\nNOTE: for computational speed, we use the gasteiger charge model. This is not advisable for production work! am1-bcc or esp are far likelier to produce sensible results.", "drug_parameters = mdt.create_ff_parameters(drugmol, charges='gasteiger')", "III. 
Prepping the protein\nSection II. dealt with getting forcefield parameters for an unknown small molecule. Next, we'll prep the other part of the structure.\nA. Strip waters\nWaters in crystal structures are usually stripped from a simulation as artifacts of the crystallization process. Here, we'll remove the waters from the protein structure.", "dehydrated = mdt.Molecule([atom for atom in protease.atoms if atom.residue.type != 'water'])", "B. Histidine\nHistidine is notoriously tricky, because it exists in no less than three different protonation states at biological pH (7.4) - the \"delta-protonated\" form, referred to with residue name HID; the \"epsilon-protonated\" form aka HIE; and the doubly-protonated form HIP, which has a +1 charge. Unfortunately, crystallography isn't usually able to resolve the difference between these three.\nLuckily, these histidines are pretty far from the ligand binding site, so their protonation is unlikely to affect the dynamics. We'll therefore use the guess_histidine_states function to assign a reasonable starting guess.", "mdt.guess_histidine_states(dehydrated)", "IV. Prep for dynamics\nWith these problems fixed, we can successfully assign a forcefield and set up the simulation.\nA. Assign the forcefield\nNow that we have parameters for the drug and have dealt with histidine, the forcefield assignment will succeed:", "amber_ff = mdt.forcefields.DefaultAmber()\namber_ff.add_ff(drug_parameters)\nsim_mol = amber_ff.create_prepped_molecule(dehydrated)", "B. Attach and configure simulation methods\nArmed with the forcefield parameters, we can connect an energy model to compute energies and forces, and an integrator to create trajectories:", "sim_mol.set_energy_model(mdt.models.OpenMMPotential, implicit_solvent='obc', cutoff=8.0*u.angstrom)\nsim_mol.set_integrator(mdt.integrators.OpenMMLangevin, timestep=2.0*u.fs)\nsim_mol.configure_methods()", "C. Equilibrate the protein\nThe next series of cells first minimizes the crystal structure to remove clashes, then heats the system to 300K.", "mintraj = sim_mol.minimize()\nmintraj.draw()\n\ntraj = sim_mol.run(20*u.ps)\n\nviewer = traj.draw(display=True)\nviewer.autostyle()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
newlawrence/poliastro
docs/source/examples/Natural and artificial perturbations.ipynb
mit
[ "Natural and artificial perturbations", "# Temporary hack, see https://github.com/poliastro/poliastro/issues/281\nfrom IPython.display import HTML\nHTML('<script type=\"text/javascript\" src=\"https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.10/require.min.js\"></script>')\n\nimport numpy as np\n\nfrom plotly.offline import init_notebook_mode\ninit_notebook_mode(connected=True)\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport functools\n\nimport numpy as np\nfrom astropy import units as u\nfrom astropy.time import Time\nfrom astropy.coordinates import solar_system_ephemeris\n\nfrom poliastro.twobody.propagation import cowell\nfrom poliastro.ephem import build_ephem_interpolant\nfrom poliastro.core.elements import rv2coe\n\nfrom poliastro.core.util import norm\nfrom poliastro.util import time_range\nfrom poliastro.core.perturbations import (\n atmospheric_drag, third_body, J2_perturbation\n)\nfrom poliastro.bodies import Earth, Moon\nfrom poliastro.twobody import Orbit\nfrom poliastro.plotting import OrbitPlotter, plot, OrbitPlotter3D", "Atmospheric drag\nThe poliastro package now has several commonly used natural perturbations. One of them is atmospheric drag! See how one can monitor decay of the near-Earth orbit over time using our new module poliastro.twobody.perturbations!", "R = Earth.R.to(u.km).value\nk = Earth.k.to(u.km**3 / u.s**2).value\n\norbit = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))\n\n# parameters of a body\nC_D = 2.2 # dimentionless (any value would do)\nA = ((np.pi / 4.0) * (u.m**2)).to(u.km**2).value # km^2\nm = 100 # kg\nB = C_D * A / m\n\n# parameters of the atmosphere\nrho0 = Earth.rho0.to(u.kg / u.km**3).value # kg/km^3\nH0 = Earth.H0.to(u.km).value\ntof = (100000 * u.s).to(u.day).value\ntr = time_range(0.0, periods=2000, end=tof, format='jd', scale='tdb')\ncowell_with_ad = functools.partial(cowell, ad=atmospheric_drag,\n R=R, C_D=C_D, A=A, m=m, H0=H0, rho0=rho0)\n\nrr = orbit.sample(tr, method=cowell_with_ad)\n\nplt.ylabel('h(t)')\nplt.xlabel('t, days')\nplt.plot(tr.value, rr.data.norm() - Earth.R)", "Evolution of RAAN due to the J2 perturbation\nWe can also see how the J2 perturbation changes RAAN over time!", "r0 = np.array([-2384.46, 5729.01, 3050.46]) # km\nv0 = np.array([-7.36138, -2.98997, 1.64354]) # km/s\nk = Earth.k.to(u.km**3 / u.s**2).value\n\norbit = Orbit.from_vectors(Earth, r0 * u.km, v0 * u.km / u.s)\n\ntof = (48.0 * u.h).to(u.s).value\nrr, vv = cowell(orbit, np.linspace(0, tof, 2000), ad=J2_perturbation, J2=Earth.J2.value, R=Earth.R.to(u.km).value)\nraans = [rv2coe(k, r, v)[3] for r, v in zip(rr, vv)]\nplt.ylabel('RAAN(t)')\nplt.xlabel('t, s')\nplt.plot(np.linspace(0, tof, 2000), raans)", "3rd body\nApart from time-independent perturbations such as atmospheric drag, J2/J3, we have time-dependend perturbations. 
Let's see how the Moon changes the orbit of a GEO satellite over time!", "# database keeping positions of bodies in Solar system over time\nsolar_system_ephemeris.set('de432s')\n\nj_date = 2454283.0 * u.day # setting the exact event date is important\n\ntof = (60 * u.day).to(u.s).value\n\n# create interpolant of 3rd body coordinates (calling it on every iteration would be just too slow)\nbody_r = build_ephem_interpolant(Moon, 28 * u.day, (j_date, j_date + 60 * u.day), rtol=1e-2)\n\nepoch = Time(j_date, format='jd', scale='tdb')\ninitial = Orbit.from_classical(Earth, 42164.0 * u.km, 0.0001 * u.one, 1 * u.deg, \n 0.0 * u.deg, 0.0 * u.deg, 0.0 * u.rad, epoch=epoch)\n\n# multiply Moon gravity by 400 so that effect is visible :)\ncowell_with_3rdbody = functools.partial(cowell, rtol=1e-6, ad=third_body,\n k_third=400 * Moon.k.to(u.km**3 / u.s**2).value, \n third_body=body_r)\n\ntr = time_range(j_date.value, periods=1000, end=j_date.value + 60, format='jd', scale='tdb')\nrr = initial.sample(tr, method=cowell_with_3rdbody)\n\nframe = OrbitPlotter3D()\n\nframe.set_attractor(Earth)\nframe.plot_trajectory(rr, label='orbit influenced by Moon')\nframe.show()", "Thrusts\nApart from natural perturbations, there are artificial thrusts aimed at intentionally changing orbit parameters. One such maneuver is a simultaneous change of eccentricity and inclination.", "from poliastro.twobody.thrust import change_inc_ecc\n\necc_0, ecc_f = 0.4, 0.0\na = 42164 # km\ninc_0 = 0.0 # rad, baseline\ninc_f = (20.0 * u.deg).to(u.rad).value # rad\nargp = 0.0 # rad, the method is efficient for 0 and 180\nf = 2.4e-6 # km / s2\n\nk = Earth.k.to(u.km**3 / u.s**2).value\ns0 = Orbit.from_classical(\n Earth,\n a * u.km, ecc_0 * u.one, inc_0 * u.deg,\n 0 * u.deg, argp * u.deg, 0 * u.deg,\n epoch=Time(0, format='jd', scale='tdb')\n)\n \na_d, _, _, t_f = change_inc_ecc(s0, ecc_f, inc_f, f)\n\ncowell_with_ad = functools.partial(cowell, rtol=1e-6, ad=a_d)\n\ntr = time_range(0.0, periods=1000, end=(t_f * u.s).to(u.day).value, format='jd', scale='tdb')\nrr = s0.sample(tr, method=cowell_with_ad)\n\nframe = OrbitPlotter3D()\n\nframe.set_attractor(Earth)\nframe.plot_trajectory(rr, label='orbit with artificial thrust')\nframe.show()" ]
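"As a further sketch (not part of the original example), several perturbations can be combined by summing their accelerations in a small wrapper function and passing it as ad to cowell. The keyword arguments mirror the calls used earlier in this notebook; treat the exact signatures of J2_perturbation and atmospheric_drag as an assumption to verify against your installed poliastro version.", "# Hypothetical sketch: combined J2 + atmospheric drag perturbation on a low orbit\ndef combined_accel(t0, state, k, J2, R, C_D, A, m, H0, rho0):\n    # sum of the individual perturbing accelerations (km / s^2)\n    return (J2_perturbation(t0, state, k, J2, R) +\n            atmospheric_drag(t0, state, k, R, C_D, A, m, H0, rho0))\n\nleo = Orbit.circular(Earth, 250 * u.km, epoch=Time(0.0, format='jd', scale='tdb'))\ncowell_combined = functools.partial(cowell, ad=combined_accel,\n                                    J2=Earth.J2.value, R=Earth.R.to(u.km).value,\n                                    C_D=2.2, A=((np.pi / 4.0) * (u.m**2)).to(u.km**2).value, m=100,\n                                    H0=Earth.H0.to(u.km).value, rho0=Earth.rho0.to(u.kg / u.km**3).value)\n\ntof = (100000 * u.s).to(u.day).value\ntr = time_range(0.0, periods=500, end=tof, format='jd', scale='tdb')\nrr = leo.sample(tr, method=cowell_combined)\n\nplt.ylabel('h(t)')\nplt.xlabel('t, days')\nplt.plot(tr.value, rr.data.norm() - Earth.R)"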
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
materialsvirtuallab/matgenb
notebooks/2013-01-01-Units.ipynb
bsd-3-clause
[ "Introduction\nFrom v2.8.0, pymatgen comes with a fairly robust system of managing units. In essence, subclasses of float and numpy array is provided to attach units to any quantity, as well as provide for conversions. These are loaded at the root level of pymatgen and some properties (e.g., atomic masses, final energies) are returned with attached units. This demo provides an outline of some of the capabilities.\nLet's start with some common units, like Energy.", "import pymatgen as mg\n#The constructor is simply the value + a string unit.\ne = mg.Energy(1000, \"Ha\")\n#Let's perform a conversion. Note that when printing, the units are printed as well.\nprint \"{} = {}\".format(e, e.to(\"eV\"))\n#To check what units are supported\nprint \"Supported energy units are {}\".format(e.supported_units)", "Units support all functionality that is supported by floats. Unit combinations are automatically taken care of.", "dist = mg.Length(65, \"mile\")\ntime = mg.Time(30, \"min\")\nspeed = dist / time\nprint \"The speed is {}\".format(speed)\n#Let's do a more sensible unit.\nprint \"The speed is {}\".format(speed.to(\"mile h^-1\"))", "Note that complex units are specified as space-separated powers of units. Powers are specified using \"^\". E.g., \"kg m s^-1\". Only integer powers are supported.\nNow, let's do some basic science.", "g = mg.FloatWithUnit(9.81, \"m s^-2\") #Acceleration due to gravity\nm = mg.Mass(2, \"kg\")\nh = mg.Length(10, \"m\")\nprint \"The force is {}\".format(m * g)\nprint \"The potential energy is force is {}\".format((m * g * h).to(\"J\"))", "Some highly complex conversions are possible with this system. Let's do some made up units. We will also demonstrate pymatgen's internal unit consistency checks.", "made_up = mg.FloatWithUnit(100, \"Ha^3 bohr^-2\")\nprint made_up.to(\"J^3 ang^-2\")\n\ntry:\n made_up.to(\"J^2\")\nexcept mg.UnitError as ex:\n print ex", "For arrays, we have the equivalent EnergyArray, ... and ArrayWithUnit classes. All other functionality remain the same.", "dists = mg.LengthArray([1, 2, 3], \"mile\")\ntimes = mg.TimeArray([0.11, 0.12, 0.23], \"h\")\nprint \"Speeds are {}\".format(dists / times)", "This concludes the tutorial on units in pymatgen." ]
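"One more example in the same spirit (an appendix-style addition, not from the original demo, written in the same Python 2 style as the rest of this notebook): any derived quantity can be collapsed into a named unit as long as the dimensions are compatible, e.g. mechanical work built up from a force and a distance.", "force = mg.FloatWithUnit(500, \"kg m s^-2\") # a 500 N force expressed in base units\ndist = mg.Length(3, \"m\")\nwork = force * dist # units: kg m^2 s^-2\nprint \"The work done is {}\".format(work.to(\"J\"))\nprint \"In atomic units that is {}\".format(work.to(\"Ha\"))"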
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aboSamoor/compsocial
Word_Tracker/3rd_Yr_Paper/PsychoInfo.ipynb
gpl-3.0
[ "import glob\nfrom io import open\nimport pandas as pd\nfrom pandas import DataFrame as df\nfrom os import path\nimport re\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Merge CSV databases", "from tools import get_psycinfo_database\n\nwords_df = get_psycinfo_database()\n\nwords_df.head()\n\n#words_df.to_csv(\"data/PsycInfo/processed/psychinfo_combined.csv.bz2\", encoding='utf-8',compression='bz2')", "Load PsychINFO unified database", "#psychinfo = pd.read_csv(\"data/PsycInfo/processed/psychinfo_combined.csv.bz2\", encoding='utf-8', compression='bz2')\npsychinfo = words_df", "Term appearance in abstract and title", "abstract_occurrence = []\nfor x,y in psychinfo[[\"Term\", \"Abstract\"]].fillna(\"\").values:\n if x.lower() in y.lower():\n abstract_occurrence.append(1)\n else:\n abstract_occurrence.append(0)\npsychinfo[\"term_in_abstract\"] = abstract_occurrence\n\ntitle_occurrence = []\nfor x,y in psychinfo[[\"Term\", \"Title\"]].fillna(\"\").values:\n if x.lower() in y.lower():\n title_occurrence.append(1)\n else:\n title_occurrence.append(0)\npsychinfo[\"term_in_title\"] = title_occurrence\n\npsychinfo_search = psychinfo.drop('Abstract', 1)\npsychinfo_search = psychinfo_search.drop('Title', 1)\n\nterm_ID = {\"multiculturalism\": 1, \"polyculturalism\": 2, \"cultural pluralism\": 3, \n \"monocultural\": 4, \"monoracial\": 5, \"bicultural\": 6, \n \"biracial\": 7, \"biethnic\": 8, \"interracial\": 9, \n \"multicultural\": 10, \"multiracial\": 11, \"polycultural\": 12, \n \"polyracial\": 13, \"polyethnic\": 14, \"mixed race\": 15, \n \"mixed ethnicity\": 16, \"other race\": 17, \"other ethnicity\": 18}\n\npsychinfo_search[\"term_ID\"] = psychinfo_search.Term.map(term_ID)\n\npsychinfo_search[\"Type of Book\"].value_counts()\n\ntype_of_book = { 'Handbook/Manual': 1, 'Textbook/Study Guide': 2, 'Conference Proceedings': 3,\n 'Reference Book': 2, 'Classic Book': 4,'Handbook/Manual\\n\\nTextbook/Study Guide': 5,\n 'Reference Book\\n\\nTextbook/Study Guide': 5,'Classic Book\\n\\nTextbook/Study Guide': 5,\n 'Handbook/Manual\\n\\nReference Book': 5,'Conference Proceedings\\n\\nTextbook/Study Guide': 5,\n 'Reference Book\\r\\rTextbook/Study Guide': 5,'Conference Proceedings\\r\\rTextbook/Study Guide': 5}\n\npsychinfo_search[\"type_of_book\"] = psychinfo_search[\"Type of Book\"].map(type_of_book)\n\npsychinfo_search[\"cited_references\"] = psychinfo_search['Cited References'].map(lambda text:len(text.strip().split(\"\\n\")),\"ignore\")\n\npsychinfo_search['Document Type'].value_counts()\n\ndocument_type = {'Journal Article': 1, 'Dissertation': 2, 'Chapter': 3, 'Review-Book': 4,\n 'Comment/Reply': 6, 'Editorial': 6, 'Chapter\\n\\nReprint': 3,\n 'Erratum/Correction': 6, 'Review-Media': 6, 'Abstract Collection': 6,\n 'Letter': 6, 'Obituary': 6, 'Chapter\\n\\nComment/Reply': 3, 'Column/Opinion': 6,\n 'Reprint': 5, 'Bibliography': 5, 'Journal Article\\n\\nReprint': 1,\n 'Chapter\\r\\rReprint': 3, 'Chapter\\n\\nJournal Article\\n\\nReprint': 3,\n 'Bibliography\\n\\nChapter': 3, 'Encyclopedia Entry': 5,\n 'Chapter\\r\\rJournal Article\\r\\rReprint': 3, 'Review-Software & Other': 6,\n 'Publication Information': 6, 'Journal Article\\r\\rReprint': 1,\n 'Reprint\\n\\nReview-Book': 4}\n\npsychinfo_search['document_type'] = psychinfo_search['Document Type'].map(document_type)\n\npsychinfo_search[\"conference_dich\"] = psychinfo_search[\"Conference\"].fillna(\"\").map(lambda x: int((len(x) > 0)))\n\n\npsychinfo_search['Publication Type'].value_counts()\n\npublication_type = {'Journal\\n\\nPeer Reviewed 
Journal': 1, 'Book\\n\\nEdited Book': 3,\n 'Dissertation Abstract': 2, 'Book\\n\\nAuthored Book': 3,\n 'Journal\\r\\rPeer Reviewed Journal': 1, 'Electronic Collection': 1,\n 'Journal\\n\\nPeer-Reviewed Status-Unknown': 1, 'Book\\r\\rEdited Book': 3,\n 'Book': 3, 'Journal\\r\\rPeer-Reviewed Status-Unknown': 1,\n 'Book\\r\\rAuthored Book': 3, 'Encyclopedia': 4}\n\npsychinfo_search['publication_type'] = psychinfo_search['Publication Type'].map(publication_type)\n\n(psychinfo_search[\"publication_type\"] * psychinfo_search[\"conference_dich\"]).value_counts()\n\n\nselection = (psychinfo_search[\"publication_type\"] == 3) * (psychinfo_search[\"conference_dich\"] == 1)\npsychinfo_search[selection][[\"Publication Type\", \"Conference\"]]\n\npsychinfo_search['Language'].value_counts()\n\nlanguage = {'English': 1, 'French': 2, 'Spanish': 3, 'Italian': 4, 'German': 5, 'Portuguese': 6,\n 'Dutch': 7, 'Chinese': 8, 'Greek': 9, 'Hebrew': 10, 'Turkish': 10, 'Russian': 10,\n 'Serbo-Croatian': 10, 'Slovak': 10, 'Japanese': 10, 'Hungarian': 10, 'Czech': 10,\n 'Danish': 10, 'Romanian': 10, 'Polish': 10, 'Norwegian': 10, 'Swedish': 10, 'Finnish': 10,\n 'NonEnglish': 10, 'Arabic': 10, 'Afrikaans': 10}\n\npsychinfo_search['language'] = psychinfo_search['Language'].map(language)\n\n#psychinfo_search[\"PsycINFO Classification Code\"].value_counts().to_csv(\"data/PsycInfo/processed/PsycINFO_Classification_Code.csv\")\n\n#psychinfo_search[\"Tests & Measures\"].value_counts().to_csv(\"data/PsycInfo/processed/Tests_&_Measures.csv\")\n\n#psychinfo_search[\"Key Concepts\"].value_counts().to_csv(\"data/PsycInfo/processed/Key_Concepts.csv\")\n\n#psychinfo_search[\"Location\"].value_counts().to_csv(\"data/PsycInfo/processed/Location.csv\")\n\n#psychinfo_search[\"MeSH Subject Headings\"].value_counts().to_csv(\"data/PsycInfo/processed/MeSH_Subject_Headings.csv\")\n\n#psychinfo_search[\"Journal Name\"].value_counts().to_csv(\"data/PsycInfo/processed/Journal_Name.csv\")\n\n#psychinfo_search[\"Institution\"].value_counts().to_csv(\"data/PsycInfo/processed/Institution.csv\")\n\nlen(psychinfo_search[\"Population Group\"].value_counts())\n\n#psychinfo_search[\"Methodology\"].value_counts()\n\ndef GetCats(text):\n pattern = re.compile(\"([0-9]+)\")\n results = [100*(int(x)//100) for x in pattern.findall(text)]\n if len(set(results))>1:\n return 4300 \n else:\n return results[0] \n\npsychinfo_search[\"PsycINFO_Classification_Code\"] = psychinfo_search[\"PsycINFO Classification Code\"].map(GetCats, \"ignore\")\n\nlists = psychinfo[\"PsycINFO Classification Code\"].map(GetCats, \"ignore\")\nlen(set([x for x in lists.dropna()]))\n#Number of unique categories\n\npsychinfo_search[\"grants_sponsorship\"] = psychinfo_search[\"Grant/Sponsorship\"].fillna(\"\").map(lambda x: int(len(x) > 0))\n\n#psychinfo_search.to_csv(\"data/PsycInfo/processed/psychinfo_term_search.csv.bz2\", encoding='utf-8', compression='bz2')\n\n#psychinfo_search = psychinfo_search.drop('Title', 1)\n\n#psychinfo_search[\"Methodology\"].value_counts().to_csv(\"data/PsycInfo/Manual_Mapping/Methodology.csv\")\n\n#psychinfo_search[\"Population Group\"].value_counts().to_csv(\"data/PsycInfo/Manual_Mapping/Population_Group.csv\")", "PsycINFO Tasks\nKeep the current spreadsheet and add the following: \n1. ~~Add Term in Abstract to spreadsheet~~ (word co-occurrence and control for the length of the abstract--lambda(len(abstract)) )do this for NSF/NIH data as well\n1. ~~Add Term in Title to spreadsheet~~\n1. 
~~Copy the word data into a new column (title it 'terms')--> code them as the following: 1 = multiculturalism, 2 = polyculturalism, 3 = cultural pluralism, 4 = monocultural, 5 = monoracial, 6 = bicultural, 7 = biracial, 8 = biethnic, 9 = interracial, 10 = multicultural, 11 = multiracial, 12 = polycultural, 13 = polyracial, 14 = polyethnic, 15 = mixed race, 16 = mixed ethnicity, 17 = other race, 18 = other ethnicity~~\n1. Search all options in set for the following categories: -- I will manually categorize them once you give all options in each set\n 1. ~~\"Type of Book\"~~\n 1. ~~\"PsycINFO Classification Code\"~~\n ~~1. (used the classification codes[recoded to most basic category levels] -- subcategories \n created by PsycInfo (22)-- multiple categories = 4300)~~\n 1. ~~\"Document Type\"~~\n 1. ~~\"Grant/Scholarship\"~~ \n 1. ~~(create a dichotomized variable 0/1)~~\n 1. ~~\"Tests & Measures\"--> csv (no longer necessary)~~\n 1. ~~(Too many categories---needs to be reviewed manually/carefully in excel)~~\n 1. ~~\"Publication Type\"~~\n 1. ~~\"Publication Status\"~~\n 1. \"Population Group\" \n 1. (Need to be mapped manually and then recategorized)\n 1. We need: gender, age (abstract, years)\n 1. \"Methodology\"\n 1. (can make specific methods dichotomous--may remove if unnecessary)\n 1. \"Conference\" \n 1. ~~Right now, this is text (~699 entries)--> dichotomize variable.~~ \n ~~If it is a conference ie there is a text = 1, if there is NaN = 0.~~\n 1. Then, I will incorporate this as a new category in \"Publication Type\" and remove this column).??? [not currently included as a category--overlaps with category 3 in Publication Type = Books]\n 1. \"Key Concepts\"--> csv \n 1. (word co-occurrence)\n 1. \"Location\"-->csv--> sent to Barbara\n 1. (categorized by region--multiple regions)\n 1. ~~\"Language\"~~\n ~~1. I am not sure about my \"other\" language (10) category -- I put everything with less \n than 10 entries into one category.~~\n 1. \"MeSH Subject Headings\"--> csv (may no longer be necessary?)\n 1. (word co-occurrence)\n 1. \"Journal Name\"-->csv--> sent to Jian Xin\n 1. (categorized by H-index in 2014)\n 1. \"Institution\"-->csv --> sent to Barbara\n 1. (categorized by state, region & country)\n1. ~~Count the number of cited references for each entry~~\n***Once we extract the csv files for these columns, I will categorize them. \nOnce all of these corrections have been made, make a new spreadsheet and delete the following information: \n1. Volume\n1. Publisher\n1. Accession Number\n1. Author(s) \n1. Issue\n1. Cited References\n1. Publication Status (had no variance)--only first posting\n1. Document Type???", "len(psychinfo_search[\"Population Group\"].value_counts())" ]
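"A small additional sketch (hypothetical, not part of the original coding scheme) for the task-list idea of controlling term occurrence for abstract length: count how often each entry's term appears in its abstract and normalise by the abstract's word count.", "term_abstract = psychinfo[[\"Term\", \"Abstract\"]].fillna(\"\")\npsychinfo[\"term_count_in_abstract\"] = [y.lower().count(x.lower()) for x, y in term_abstract.values]\npsychinfo[\"abstract_length\"] = [len(y.split()) for y in term_abstract[\"Abstract\"].values]\n# replace zero lengths with NaN so missing abstracts do not produce a division by zero\npsychinfo[\"term_rate_in_abstract\"] = psychinfo[\"term_count_in_abstract\"] / psychinfo[\"abstract_length\"].replace(0, float('nan'))\npsychinfo[[\"Term\", \"term_count_in_abstract\", \"abstract_length\", \"term_rate_in_abstract\"]].head()"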
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
FedericoMuciaccia/SistemiComplessi
src/heatmap_and_range.ipynb
mit
[ "import numpy\nimport pandas\n\nimport matplotlib\nfrom matplotlib import pyplot\n%matplotlib inline\n\nimport scipy\nfrom scipy import stats # TODO see why the module can't be called directly\n\nimport gmaps", "Creating the map\nInstead of a scatterplot with the coverage radii, the library only lets us draw a heatmap (optionally weighted)", "roma = pandas.read_csv(\"../data/Roma_towers.csv\")\ncoordinate = roma[['lat', 'lon']].values\n\nheatmap = gmaps.heatmap(coordinate)\ngmaps.display(heatmap)\n\n# TODO note that behind these two simple lines lies a whole afternoon of cursing\n\ncolosseo = (41.890183, 12.492369)\n\nimport gmplot\nfrom gmplot import GoogleMapPlotter\n\n# gmap = gmplot.from_geocode(\"San Francisco\")\n\nmappa = gmplot.GoogleMapPlotter(41.890183, 12.492369, 11)\n\n#gmap.plot(latitudes, longitudes, 'cornflowerblue', edge_width=10)\n#gmap.plot((41.890183, 41.891183), (12.492369, 12.493369), 'cornflowerblue', edge_width=10)\n#gmap.scatter(more_lats, more_lngs, '#3B0B39', size=40, marker=False)\n#gmap.scatter(marker_lats, marker_lngs, 'k', marker=True)\n#gmap.heatmap(heat_lats, heat_lngs)\n\n#mappa.scatter((41.890183, 41.891183), (12.492369, 12.493369), color='#3B0B39', size=40, marker=False)\n\n#mappa.scatter(roma.lat.values,\n# roma.lon.values,\n# color='#3333ff',\n# size=0,\n# marker=False)\n\nmappa.heatmap(roma.lat.values,roma.lon.values)\n\nmappa.draw(\"../html/heatmap.html\")\n#print a", "NOTES from looking at the map\nThere seem to be problems with the positions of the antennas: there are antennas on the Tiber, on Ponte Sisto, inside the small park of Castel Sant'Angelo, in the middle of the big lawn at La Sapienza, on top of the Physics department...\nThere also seems to be a strange clustering along the main traffic routes. This is reasonable if the goal is to guarantee coverage in a city with large tourist flows like Rome, but probably not to the point of making 7 antennas around piazza Pantheon plausible. There are also pairs of isolated antennas that appear to be only a few metres apart. These are probably reconstruction artifacts.\nMozilla's reconstruction algorithm probably has several problems. If this is the situation for the cell antennas, we don't dare imagine the situation for the wifi routers.\nThese measurements and reconstructions need to be accurate, because their future geolocation service will be built on top of them.\nSomeone should point this out to them (maybe they'll hire us :-) )\nAnalysis of the antennas' coverage radius\nSince we will need a plot with logarithmic scales, we keep only the data with\n\nrange != 
0", "\n# condizioni di filtro\nraggioMin = 1\n# raggioMax = 1000\nraggiPositivi = roma.range >= raggioMin\n# raggiCorti = roma.range < raggioMax\n\n# query con le condizioni\n#romaFiltrato = roma[raggiPositivi & raggiCorti]\nromaFiltrato = roma[raggiPositivi]\nraggi = romaFiltrato.range\n\nprint max(raggi)\n\n\n\n# logaritmic (base 2) binning in log-log (base 10) plots of integer histograms\n\ndef logBinnedHist(histogramResults):\n \"\"\"\n histogramResults = numpy.histogram(...)\n OR matplotlib.pyplot.hist(...)\n \n returns x, y\n to be used with matplotlib.pyplot.step(x, y, where='post')\n \"\"\"\n \n # TODO così funziona solo con l'istogramma di pyplot;\n # quello di numpy restituisce solo la tupla (values, binEdges)\n values, binEdges, others = histogramResults\n \n # print binEdges\n \n # TODO\n # if 0 in binEdges:\n # return \"error: log2(0) = ?\"\n \n # print len(values), len(binEdges)\n \n # print binEdges # TODO vedere quando non si parte da 1\n \n # int arrotonda all'intero inferiore\n linMin = min(binEdges)\n linMax = max(binEdges)\n \n # print linMin, linMax\n \n logStart = int(numpy.log2(linMin))\n logStop = int(numpy.log2(linMax))\n \n # print logStart, logStop\n \n nLogBins = logStop - logStart + 1\n \n # print nLogBins\n \n logBins = numpy.logspace(logStart, logStop, num=nLogBins, base=2, dtype=int)\n # print logBins\n \n # 1,2,4,8,16,32,64,128,256,512,1024\n \n ######################\n \n linStart = 2**logStop + 1\n linStop = linMax\n \n # print linStart, linStop\n \n nLinBins = linStop - linStart + 1\n \n # print nLinBins\n \n linBins = numpy.linspace(linStart, linStop, num=nLinBins, dtype=int)\n \n # print linBins\n \n ######################\n \n bins = numpy.append(logBins, linBins)\n \n # print bins\n \n # print len(bins)\n \n \n \n \n # TODO rendere generale questa funzione!!!\n totalValues, binEdges, otherBinNumbers = scipy.stats.binned_statistic(raggi.values,\n raggi.values,\n statistic='count',\n bins=bins)\n \n # print totalValues\n # print len(totalValues)\n \n # uso le proprietà dei logaritmi in base 2:\n # 2^(n+1) - 2^n = 2^n\n correzioniDatiCanalizzatiLog = numpy.delete(logBins, -1)\n \n # print correzioniDatiCanalizzatiLog\n \n # print len(correzioniDatiCanalizzatiLog)\n \n correzioniDatiCanalizzatiLin = numpy.ones(nLinBins, dtype=int)\n \n # print correzioniDatiCanalizzatiLin\n \n # print len(correzioniDatiCanalizzatiLin)\n \n correzioniDatiCanalizzati = numpy.append(correzioniDatiCanalizzatiLog, correzioniDatiCanalizzatiLin)\n \n # print correzioniDatiCanalizzati\n \n # print len(correzioniDatiCanalizzati)\n \n \n \n \n x = numpy.concatenate(([0], bins))\n conteggi = totalValues/correzioniDatiCanalizzati\n \n # TODO caso speciale per il grafico di sotto\n # (per non fare vedere la parte oltre l'ultima potenza di 2)\n l = len(correzioniDatiCanalizzatiLin)\n conteggi[-l:] = numpy.zeros(l, dtype='int')\n \n y = numpy.concatenate(([0], conteggi, [0]))\n \n return x, y\n\n\n\n# creazione di un istogramma log-log per la distribuzione del raggio di copertura\n\n# TODO provare a raggruppare le code\n# esempio: con bins=100\n# oppure con canalizzazione a logaritmo di 2, ma mediato\n# in modo che venga equispaziato nel grafico logaritmico\n# il programma vuole pesati i dati e non i canali\n# si potrebbe implementare una mappa che pesa i dati\n# secondo la funzione divisione intera per logaritmo di 2\n# TODO mettere cerchietto che indica il range massimo oppure scritta in rosso \"20341 m!\"\n# TODO spiegare perché ci sono così tanti conteggi a 1,2,4,... 
metri\n# TODO ricavare il range dai dati grezzi, facendo un algoritmo di clustering\n# sulle varie osservazioni delle antenne. machine learning?\n# TODO scrivere funzione che fa grafici logaritmici con canali\n# equispaziati nel plot logaritmico (canali pesati)\n\n# impostazioni plot complessivo\n# pyplot.figure(figsize=(20,8)) # dimensioni in pollici\npyplot.figure(figsize=(10,10))\nmatplotlib.pyplot.xlim(10**0,10**5)\nmatplotlib.pyplot.ylim(10**-3,10**2)\npyplot.title('Distribuzione del raggio di copertura')\npyplot.ylabel(\"Numero di antenne\")\npyplot.xlabel(\"Copertura [m]\")\n# pyplot.gca().set_xscale(\"log\")\n# pyplot.gca().set_yscale(\"log\")\npyplot.xscale(\"log\")\npyplot.yscale(\"log\")\n\n# lin binning\ndistribuzioneRange = pyplot.hist(raggi.values,\n bins=max(raggi)-min(raggi),\n histtype='step',\n color='#3385ff',\n label='linear binning')\n\n# log_2 binning\nxLog2, yLog2 = logBinnedHist(distribuzioneRange)\nmatplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', linewidth=2, label='log_2 weighted binning') #where = mid OR post\n# matplotlib.pyplot.plot(xLog2, yLog2)\n\n# linea verticale ad indicare il massimo grado\npyplot.axvline(x=max(raggi), color='#808080', linestyle='dotted', label='max range (41832m)')\n\n# legenda e salvataggio\npyplot.legend(loc='lower left', frameon=False)\npyplot.savefig('../img/range/infinite_log_binning.svg', format='svg', dpi=600, transparent=True)\n\n", "Frequency-rank", "# istogramma sugli interi\nunique, counts = numpy.unique(raggi.values, return_counts=True)\n# print numpy.asarray((unique, counts)).T\nrank = numpy.arange(1,len(unique)+1)\nfrequency = numpy.array(sorted(counts, reverse=True))\n\npyplot.figure(figsize=(20,10))\npyplot.title('Distribuzione del raggio di copertura')\npyplot.ylabel(\"Numero di antenne\")\npyplot.xlabel(\"Copertura [m] o ranking\")\npyplot.xscale(\"log\")\npyplot.yscale(\"log\")\nmatplotlib.pyplot.xlim(10**0,10**4)\nmatplotlib.pyplot.ylim(10**0,10**2)\n\nmatplotlib.pyplot.step(x=rank, y=frequency, where='post', label='frequency-rank', color='#00cc44')\n\nmatplotlib.pyplot.scatter(x=unique, y=counts, marker='o', color='#3385ff', label='linear binning (scatter)')\nmatplotlib.pyplot.step(xLog2, yLog2, where='post', color='#ff3300', label='log_2 weighted binning')\n\npyplot.legend(loc='lower left', frameon=False)\npyplot.savefig('../img/range/range_distribution.svg', format='svg', dpi=600, transparent=True)\n\n", "Cumulative histogram\nthe cumulative distribution function cdf(x) is the probability that a real-valued random variable X will take a value less than or equal to x", "conteggi, binEdges = numpy.histogram(raggi.values,\n bins=max(raggi)-min(raggi))\nconteggiCumulativi = numpy.cumsum(conteggi)\nvaloriRaggi = numpy.delete(binEdges, -1)\nN = len(raggi.values)\n\n\npyplot.figure(figsize=(12,10))\npyplot.title('Raggio di copertura')\npyplot.ylabel(\"Numero di antenne\")\npyplot.xlabel(\"Copertura [m]\")\npyplot.xscale(\"log\")\npyplot.yscale(\"log\")\nmatplotlib.pyplot.xlim(10**0,10**5)\nmatplotlib.pyplot.ylim(10**0,10**4)\n\nmatplotlib.pyplot.step(x=valoriRaggi, y=conteggiCumulativi, where='post', label='Cumulata', color='#009999')\nmatplotlib.pyplot.step(x=valoriRaggi, y=N-conteggiCumulativi, where='post', label='N - Cumulata', color='#ff0066')\n\npyplot.axhline(y=N, color='#808080', linestyle='dotted', label='N_max = 6505')\n\npyplot.legend(loc='lower left', frameon=False)\npyplot.savefig('../img/range/range_cumulated_distribution.svg', format='svg', dpi=600, transparent=True)\n\n\n# TODO fare fit 
a mano e controllare le relazioni tra i vari esponenti" ]
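"A minimal sketch for the fit mentioned in the TODO above (a rough estimate of the exponent, not a rigorous power-law fit): fit a straight line in log-log space through the frequency-rank curve computed earlier.", "# rough power-law exponent from the frequency-rank data defined above\nmask = frequency > 0\nslope, intercept = numpy.polyfit(numpy.log10(rank[mask]), numpy.log10(frequency[mask]), 1)\nprint 'frequency ~ rank^{:.2f}'.format(slope)\n\npyplot.figure(figsize=(10,10))\npyplot.xscale('log')\npyplot.yscale('log')\npyplot.scatter(rank, frequency, color='#00cc44', label='frequency-rank')\npyplot.plot(rank, 10**intercept * rank**slope, color='#ff3300', label='power-law fit')\npyplot.legend(loc='lower left', frameon=False)"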
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NREL/bifacial_radiance
docs/tutorials/9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research documentation).ipynb
bsd-3-clause
[ "9 - Advanced topics - 1 axis torque tube Shading for 1 day (Research Documentation)\nRecreating JPV 2019 / PVSC 2018 Fig. 13\nCalculating and plotting shading from torque tube on 1-axis tracking for 1 day, which is figure 13 in: \n Ayala Pelaez S, Deline C, Greenberg P, Stein JS, Kostuk RK. Model and validation of single-axis tracking with bifacial PV. IEEE J Photovoltaics. 2019;9(3):715–21. https://ieeexplore.ieee.org/document/8644027 and https://www.nrel.gov/docs/fy19osti/72039.pdf (pre-print, conference version)\n\nThis is what we will re-create:\n\nUse bifacial_radiance minimum v. 0.3.1 or higher. Many things have been updated since this paper, simplifying the generation of this plot:\n<ul>\n <li> Sensor position is now always generated E to W on N-S tracking systems, so same sensor positions can just be added for this calculation at the end without needing to flip the sensors. </li>\n <li> Torquetubes get automatically generated in makeModule. Following PVSC 2018 paper, rotation is around the modules and not around the torque tube axis (which is a new feature) </li>\n <li> Simulating only 1 day on single-axis tracking easier with cumulativesky = False and gendaylit1axis(startdate='06/24', enddate='06/24' </li> \n <li> Sensors get generated very close to surface, so all results are from the module surface and not the torquetube for this 1-UP case. </li>\n</ul>\n\nSteps:\n<ol>\n <li> <a href='#step1'> Running the simulations for all the cases: </li>\n <ol type='A'> \n <li> <a href='#step1a'>Baseline Case: No Torque Tube </a></li>\n <li> <a href='#step1b'> Zgap = 0.1 </a></li>\n <li> <a href='#step1c'> Zgap = 0.2 </a></li>\n <li> <a href='#step1d'> Zgap = 0.3 </a></li>\n </ol>\n <li> <a href='#step2'> Read-back the values and tabulate average values for unshaded, 10cm gap and 30cm gap </a></li>\n <li> <a href='#step3'> Plot spatial loss values for 10cm and 30cm data </a></li>\n <li> <a href='#step4'> Overall Shading Factor (for 1 day) </a></li>\n</ol>\n\n<a id='step1'></a>\n1. Running the simulations for all the cases", "import os\nfrom pathlib import Path\n\ntestfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_09')\nif not os.path.exists(testfolder):\n os.makedirs(testfolder)\n\nprint (\"Your simulation will be stored in %s\" % testfolder)\n\n# VARIABLES of the simulation: \nlat = 35.1 # ABQ\nlon = -106.7 # ABQ\nx=1\ny = 2 \nnumpanels=1\nlimit_angle = 45 # tracker rotation limit angle\nbacktrack = True\nalbedo = 'concrete' # ground albedo\nhub_height = y*0.75 # H = 0.75 \ngcr = 0.35 \npitch = y/gcr\n#pitch = 1.0/gcr # Check from 1Axis_Shading_PVSC2018 file\ncumulativesky = False # needed for set1axis and makeScene1axis so simulation is done hourly not with gencumsky.\nlimit_angle = 45 # tracker rotation limit angle\nnMods=10\nnRows=3\nsensorsy = 200\nmodule_type='test-module'\ndatewanted='06_24' # sunny day 6/24/1972 (index 4180 - 4195). Valid formats starting version 0.4.0 for full day sim: mm_dd\n\n## Torque tube info\ntubetype='round'\nmaterial = 'Metal_Grey'\ndiameter = 0.1\naxisofrotationTorqueTube = False # Original PVSC version rotated around the modules like most other software.\n# Variables that will get defined on each iteration below:\nzgap = 0 # 0.2, 0.3 values tested. 
Re-defined on each simulation.\nvisible = False # baseline is no torque tube.\n\n\n# Simulation Start.\nimport bifacial_radiance\nimport numpy as np\n\nprint(bifacial_radiance.__version__)\n\ndemo = bifacial_radiance.RadianceObj(path = testfolder) \ndemo.setGround(albedo)\nepwfile = demo.getEPW(lat, lon) \nmetdata = demo.readWeatherFile(epwfile, starttime=datewanted, endtime=datewanted) \ntrackerdict = demo.set1axis(metdata, limit_angle = limit_angle, backtrack = backtrack, gcr = gcr, cumulativesky = cumulativesky)\ntrackerdict = demo.gendaylit1axis() \nsceneDict = {'pitch':pitch,'hub_height':hub_height, 'nMods': nMods, 'nRows': nRows} ", "<a id='step1a'></a>\nA. Baseline Case: No Torque Tube\nWhen torquetube is False, zgap is the distance from axis of torque tube to module surface, but since we are rotating from the module's axis, this Zgap doesn't matter for this baseline case.", "#CASE 0 No torque tube\n# When torquetube is False, zgap is the distance from axis of torque tube to module surface, but since we are rotating from the module's axis, this Zgap doesn't matter.\n# zgap = 0.1 + diameter/2.0 \ntorquetube = False \ncustomname = '_NoTT'\nmodule_NoTT = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels)\nmodule_NoTT.addTorquetube(visible=False, axisofrotation=False, diameter=0)\ntrackerdict = demo.makeScene1axis(trackerdict, module_NoTT, sceneDict, cumulativesky = cumulativesky) \ntrackerdict = demo.makeOct1axis(trackerdict)\ntrackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)\n", "<a id='step1b'></a>\nB. ZGAP = 0.1", "#ZGAP 0.1 \nzgap = 0.1\ncustomname = '_zgap0.1'\ntubeParams = {'tubetype':tubetype,\n 'diameter':diameter,\n 'material':material,\n 'axisofrotation':False,\n 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()\nmodule_zgap01 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams)\ntrackerdict = demo.makeScene1axis(trackerdict, module_zgap01, sceneDict, cumulativesky = cumulativesky) \ntrackerdict = demo.makeOct1axis(trackerdict)\ntrackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)", "<a id='step1c'></a>\nC. ZGAP = 0.2", "#ZGAP 0.2\nzgap = 0.2\ncustomname = '_zgap0.2'\ntubeParams = {'tubetype':tubetype,\n 'diameter':diameter,\n 'material':material,\n 'axisofrotation':False,\n 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()\nmodule_zgap02 = demo.makeModule(name=customname, x=x,y=y, numpanels=numpanels,zgap=zgap, tubeParams=tubeParams)\ntrackerdict = demo.makeScene1axis(trackerdict, module_zgap02, sceneDict, cumulativesky = cumulativesky) \ntrackerdict = demo.makeOct1axis(trackerdict)\ntrackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)", "<a id='step1d'></a>\nD. ZGAP = 0.3", "#ZGAP 0.3\nzgap = 0.3\ncustomname = '_zgap0.3'\ntubeParams = {'tubetype':tubetype,\n 'diameter':diameter,\n 'material':material,\n 'axisofrotation':False,\n 'visible':True} # either pass this into makeModule, or separately into module.addTorquetube()\nmodule_zgap03 = demo.makeModule(name=customname,x=x,y=y, numpanels=numpanels, zgap=zgap, tubeParams=tubeParams)\ntrackerdict = demo.makeScene1axis(trackerdict, module_zgap03, sceneDict, cumulativesky = cumulativesky) \ntrackerdict = demo.makeOct1axis(trackerdict)\ntrackerdict = demo.analysis1axis(trackerdict, sensorsy = sensorsy, customname = customname)", "<a id='step2'></a>\n2. 
Read-back the values and tabulate average values for unshaded, 10cm gap and 30cm gap", "import glob\nimport pandas as pd\n\nresultsfolder = os.path.join(testfolder, 'results')\nprint (resultsfolder)\nfilenames = glob.glob(os.path.join(resultsfolder,'*.csv'))\nnoTTlist = [k for k in filenames if 'NoTT' in k]\nzgap10cmlist = [k for k in filenames if 'zgap0.1' in k]\nzgap20cmlist = [k for k in filenames if 'zgap0.2' in k]\nzgap30cmlist = [k for k in filenames if 'zgap0.3' in k]\n\n# sum across all hours for each case\nunsh_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in noTTlist]).sum(axis = 0)\ncm10_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap10cmlist]).sum(axis = 0)\ncm20_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap20cmlist]).sum(axis = 0)\ncm30_front = np.array([pd.read_csv(f, engine='python')['Wm2Front'] for f in zgap30cmlist]).sum(axis = 0)\nunsh_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in noTTlist]).sum(axis = 0)\ncm10_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap10cmlist]).sum(axis = 0)\ncm20_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap20cmlist]).sum(axis = 0)\ncm30_back = np.array([pd.read_csv(f, engine='python')['Wm2Back'] for f in zgap30cmlist]).sum(axis = 0)", "<a id='step3'></a>\n3. plot spatial loss values for 10cm and 30cm data", "import matplotlib.pyplot as plt\nplt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.sans-serif'] = ['Helvetica']\nplt.rcParams['axes.linewidth'] = 0.2 #set the value globally\n\nfig = plt.figure()\nfig.set_size_inches(4, 2.5)\nax = fig.add_axes((0.15,0.15,0.78,0.75))\n#plt.rc('font', family='sans-serif')\nplt.rc('xtick',labelsize=8)\nplt.rc('ytick',labelsize=8)\nplt.rc('axes',labelsize=8)\nplt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm30_back - unsh_back)/unsh_back*100, label = '30cm gap',color = 'black') #steelblue\nplt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm20_back - unsh_back)/unsh_back*100, label = '20cm gap',color = 'steelblue', linestyle = '--') #steelblue\nplt.plot(np.linspace(-1,1,unsh_back.__len__()),(cm10_back - unsh_back)/unsh_back*100, label = '10cm gap',color = 'darkorange') #steelblue\n#plt.ylabel('$G_{rear}$ vs unshaded [Wm-2]')#(r'$BG_E$ [%]')\nplt.ylabel('$G_{rear}$ / $G_{rear,tubeless}$ -1 [%]')\nplt.xlabel('Module X position [m]')\nplt.legend(fontsize = 8,frameon = False,loc='best')\n#plt.ylim([0, 15])\nplt.title('Torque tube shading loss',fontsize=9)\n#plt.annotate('South',xy=(-10,9.5),fontsize = 8); plt.annotate('North',xy=(8,9.5),fontsize = 8)\nplt.show()", "<a id='step4'></a>\n4. Overall Shading Loss Factor\nTo calculate shading loss factor, we can use the following equation:\n<img src=\"../images_wiki/AdvancedJournals/Equation_ShadingFactor.PNG\">", "ShadingFactor = (1 - cm30_back.sum() / unsh_back.sum())*100" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jswhit/py-ncepbufr
test/Python_tutorial_bufr.ipynb
isc
[ "%matplotlib inline \n", "Reading an NCEP BUFR data set\nNCEP BUFR (Binary Universal Form for the Representation of meteorological data) can be read two ways:\n\n\nFortran code with BUFRLIB\n\n\npy-ncepbufr, which is basically Python wrappers around BUFRLIB\n\n\nIn this example we'll use py-ncepbufr to read a snapshot of the Argo data tank from WCOSS, show how to navigate the BUFR structure, and how to extract and plot a profile.\nThe py-ncepbufr library and installation instructions can be found at\nhttps://github.com/JCSDA/py-ncepbufr\n\nWe begin by importing the required libraries.", "import matplotlib.pyplot as plt # graphics library\nimport numpy as np\nimport ncepbufr # python wrappers around BUFRLIB", "For the purposes of this demo I've made a local copy of the Argo data tank on WCOSS \nlocated at\n\n/dcom/us007003/201808/b031/xx005 \n\nBegin by opening the file", "bufr = ncepbufr.open('data/xx005')", "Movement and data access within the BUFR file is through these methods:\nbufr.advance()\nbufr.load_subset()\nbufr.read_subset()\nbufr.rewind()\nbufr.close()\n\nThere is a lot more functionality to ncepbufr, such as searching on multiple mnenomics, printing or saving the BUFR table included in the file, printing or saving the inventory and subsets, setting and using checkpoints in the file. See the ncepbufr help for more details.\n\n\nImportant Note: py-ncepbufr is unforgiving of mistakes. A BUFRLIB fortran error will result in an immediate exit from the Python interpreter.", "# move down to first message - a return code of 0 indicates success\nbufr.advance() \n\n# load the message subset -- a return code of 0 indicates success\nbufr.load_subset() ", "You can print the subset and determine the parameter names. BUFR dumps can be very verbose, so I'll just copy in the header and the first subset replication from a bufr.dump_subset() command.\n\n\nI've highlighted in red the parameters I want to plot.\n\n\n<pre style=\"font-size: x-small\">\nMESSAGE TYPE NC031005 \n\n004001 YEAR 2018.0 YEAR YEAR \n004002 MNTH 8.0 MONTH MONTH \n004003 DAYS 1.0 DAY DAY \n004004 HOUR 0.0 HOUR HOUR \n004005 MINU 16.0 MINUTE MINUTE \n035195 SEQNUM 317 ( 4)CCITT IA5 CHANNEL SEQUENCE NUMBER \n035021 BUHD IOPX01 ( 6)CCITT IA5 BULLETIN BEING MONITORED (TTAAii) \n035023 BORG KWBC ( 4)CCITT IA5 BULLETIN BEING MONITORED (CCCC) \n035022 BULTIM 010029 ( 6)CCITT IA5 BULLETIN BEING MONITORED (YYGGgg) \n035194 BBB MISSING ( 6)CCITT IA5 BULLETIN BEING MONITORED (BBB) \n008202 RCTS 0.0 CODE TABLE RECEIPT TIME SIGNIFICANCE \n004200 RCYR 2018.0 YEAR YEAR - TIME OF RECEIPT \n004201 RCMO 8.0 MONTH MONTH - TIME OF RECEIPT \n004202 RCDY 1.0 DAY DAY - TIME OF RECEIPT \n004203 RCHR 0.0 HOUR HOUR - TIME OF RECEIPT \n004204 RCMI 31.0 MINUTE MINUTE - TIME OF RECEIPT \n033215 CORN 0.0 CODE TABLE CORRECTED REPORT INDICATOR \n001087 WMOP 6903327.0 NUMERIC WMO marine observing platform extended identifie\n001085 OPMM S2-X (20)CCITT IA5 Observing platform manufacturer's model \n001086 OPMS 10151 ( 32)CCITT IA5 Observing platform manufacturer's serial number \n002036 BUYTS 2.0 CODE TABLE Buoy type \n002148 DCLS 8.0 CODE TABLE Data collection and/or location system \n002149 BUYT 14.0 CODE TABLE Type of data buoy \n022055 FCYN 28.0 NUMERIC Float cycle number \n022056 DIPR 0.0 CODE TABLE Direction of profile \n022067 IWTEMP 846.0 CODE TABLE INSTRUMENT TYPE FOR WATER TEMPERATURE PROFILE ME\n005001 CLATH 59.34223 DEGREES LATITUDE (HIGH ACCURACY) \n006001 CLONH -9.45180 DEGREES LONGITUDE (HIGH ACCURACY) \n008080 QFQF 20.0 CODE TABLE Qualifier 
for GTSPP quality flag \n033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag \n (GLPFDATA) 636 REPLICATIONS\n ++++++ GLPFDATA REPLICATION # 1 ++++++\n<span style=\"color: red\">007065 WPRES 10000.0 PA Water pressure</span>\n008080 QFQF 10.0 CODE TABLE Qualifier for GTSPP quality flag \n033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag \n<span style=\"color: red\">022045 SSTH 285.683 K Sea/water temperature</span>\n008080 QFQF 11.0 CODE TABLE Qualifier for GTSPP quality flag \n033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag\n<span style=\"color: red\">022064 SALNH 35.164 PART PER THOUSAND Salinity</span>\n008080 QFQF 12.0 CODE TABLE Qualifier for GTSPP quality flag \n033050 GGQF 1.0 CODE TABLE Global GTSPP quality flag \n</pre>\n\n\nNow we can load the data for plotting", "temp = bufr.read_subset('SSTH').squeeze()-273.15 # convert from Kelvin to Celsius\nsal = bufr.read_subset('SALNH').squeeze()\ndepth = bufr.read_subset('WPRES').squeeze()/10000. # convert from Pa to depth in meters\n# observation location, date, and receipt time\nlon = bufr.read_subset('CLONH')[0][0]\nlat = bufr.read_subset('CLATH')[0][0]\ndate = bufr.msg_date\nreceipt = bufr.receipt_time\nbufr.close()", "Set up the plotting figure. But this time, just for fun, let's put both the temperature and salinity profiles on the same axes. This trick uses both the top and bottom axis for different parameters.\n\n\nAs these are depth profiles, we need twin x-axes and a shared y-axis for the depth.", "fig = plt.figure(figsize = (5,4))\nax1 = plt.axes()\nax1.plot(temp, depth,'r-')\nax1.grid(axis = 'y')\nax1.invert_yaxis() # flip the y-axis for ocean depths\nax2 = ax1.twiny() # here's the second x-axis definition\nax2.plot(np.nan, 'r-', label = 'Temperature')\nax2.plot(sal, depth, 'b-', label = 'Salinity')\nax2.legend()\nax1.set_xlabel('Temperature (C)', color = 'red')\nax1.set_ylabel('Depth (m)')\nax2.set_xlabel('Salinity (PSU)', color = 'blue')\nttl='ARGO T,S Profiles at lon:{:6.2f}, lat:{:6.2f}\\ntimestamp: {} received: {}\\n'.format(lon,lat,date,receipt)\nfig.suptitle(ttl,x = 0.5,y = 1.1,fontsize = 'large');" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fja05680/pinkfish
examples/A00.update-cache-symbols/update-cache-symbols.ipynb
mit
[ "Update Cache Symbols\nA useful utility for demonstating the use of the update/remove cache symbols functions. You can use this notebook to periodically update all of the timeseries in your symbol cache.", "import pandas as pd\nimport pinkfish as pf", "Update time series for the symbols below.\nTime series will be fetched for any symbols not already cached.", "pf.update_cache_symbols(symbols=['msft', 'orcl', 'tsla'])", "Remove the time series for TSLA", "pf.remove_cache_symbols(symbols=['tsla'])", "Update time series for all symbols in the cache directory", "pf.update_cache_symbols()", "Remove time series for all symbols in the cache directory", "# WARNING!!! - if you uncomment the line below, you'll wipe out\n# all the symbols in your cache directory\n#pf.remove_cache_symbols()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
danalexandru/Algo
FII-year3sem2-CN/Exam.ipynb
gpl-2.0
[ "Exam\nProblem 2. Interpolation\n\n\n\nLinear Spline Interpolation visualization, courtesy of codecogs\n\nInterpolation is a method of curve fitting.\nIn this problem, spline interpolation is considered\nPractical applications:\n+ estimating function values based on some sample of known data points\n\nProblem\nGiven the inputs and function values below, approximate f(-1) and f(1) by linear spline functions.", "from IPython.display import display\nimport pandas as pd\nimport matplotlib.pyplot\n%matplotlib inline\n\nindex = ['f(x)']\ncolumns = [-2, 0, 2, 3]\ndata = [[-3, -5, 9, 22]]\n\ndf = pd.DataFrame(data, index=index, columns=columns)\nprint(df)\n\n# for brevity, we will write it like this\nindex = [' x', 'f(x)']\ncolumns = [1, 2, 3, 4] #['x1', 'x2', 'x3', 'x4']\ndata = [[-2, 0, 2, 3], [-3, -5, 9, 22]]\n\ndf = pd.DataFrame(data, index=index, columns=columns)\ndisplay(df)\n\nmatplotlib.pyplot.plot(data[0], data[1], ls='dashed', color='#a23636')\nmatplotlib.pyplot.scatter(data[0], data[1])\nmatplotlib.pyplot.show()", "Linear spline functions are calculated with the following:\n$$i \\in [1,\\ \\left\\vert{X}\\right\\vert - 1],\\ i \\in \\mathbb{N}: $$\n$$P_i = \\frac{x-x_i}{x_{i+1}-x_i} * y_{i+1} + \\frac{x_{i+1}-x}{x_{i+1}-x_i} * y_i$$\nBy simplification, we can reduce to the following:\n$$P_i = \\frac{y_{i+1} (x-x_i) + y_i (x_{i+1}-x)}{x_{i+1}-x_i} = \\frac{(y_{i+1}x - y_ix) - y_{i+1}x_i + y_ix_{i+1}}{x_{i+1}-x_i}$$\nThe final form used will be:\n$$P_i = \\frac{(y_{i+1}x - y_ix) + (y_ix_{i+1} - y_{i+1}x_i)}{(x_{i+1}-x_i)}$$\nAs it can be seen, the only gist would be to emulate the x in the first term (num1s below), the other terms being numbers (num2, den). Parantheses used to isolate the formula for each of the 3 variables. \nAs such, we can write the parantheses as a string, while the others will be simply calculated. 
After this, the final string is evaluated as a lambda function.", "print('x1 = %i' % data[0][0])\nprint('y1 = %i' % data[1][0])\nprint('---')\n\n# linear spline function aproximation\nprint('no values: %i' % len(columns))\n\nspline = {}\n\nfor i in range(len(columns)-1):\n print('\\nP[' + str(i+1) + ']')\n \n # we calculate the numerator\n num_1s = str(data[1][i+1]) + ' * x - ' + str(data[1][i]) + ' * x'\n print('num_1s: %s' % num_1s)\n \n num_2 = data[1][i] * data[0][i+1] - data[1][i+1] * data[0][i]\n print('num_2: %i' % num_2)\n \n # we calculate the denominator\n den = data[0][i+1] - data[0][i]\n print('den: %i' % den)\n \n # constructing the function\n func = 'lambda x: (' + num_1s + str(num_2) + ') / ' + str(den)\n print('func: %s' % func)\n spline[i] = eval(func)\n\nprint('---')\n\n# sanity checks\n# P1(x) = -x - 5\nassert (spline[0](-5) == 0),\"For this example, the value should be 0, but the value returned is \" + str(spline[0](-5))\n# P2(x) = 4x + 1\n# TODO: this is failing (checked my solution, probably my assertion is wrong) !\n#assert (spline[1](0) == 1),\"For this example, the value should be 1, but the value returned is \" + str(spline[1](0))\n# P3(x) = 13x - 17\nassert (spline[2](1) == -4),\"For this example, the value should be -4, but the value returned is \" + str(spline[2](1))\n\nprint('Approximating values of S\\n---')\naproximation_queue = [-1, 1]\nresults = {}\n\ndef approximate(spline, val):\n for i in range(len(spline)-1):\n if data[0][i] <= val <= data[0][i+1]:\n print('Approximation using P[%i] is: %i' % (i, spline[i](val)))\n results[val] = spline[i](val)\n\nfor i in range(len(aproximation_queue)):\n approximate(spline, aproximation_queue[i])\n\n# sanity checks\n# S(-1) = P1(-1) = -4\nassert (spline[0](-1) == -4),\"For this example, the value should be -4, but the value returned is \" + str(spline[0](-5))\n# S(1) = P2(1) = 5\n# TODO: same as above !\n#assert (spline[1](1) == 5),\"For this example, the value should be 5, but the value returned is \" + str(spline[1](0))\n\n#x.extend(results.keys())\n#y.extend(results.values())\nx2 = list(results.keys())\ny2 = list(results.values())\n\nmatplotlib.pyplot.plot(data[0], data[1], ls='dashed', color='#a23636')\nmatplotlib.pyplot.scatter(data[0], data[1])\nmatplotlib.pyplot.scatter(x2, y2, color='#ff0000')\n\nmatplotlib.pyplot.show()", "As it can be seen, in linear spline interpolation, all approximations will be found on the line.\nDepending on the sample size and on the original function this may result in deviation from the function curve." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
AlJohri/DAT-DC-12
notebooks/intro-python.ipynb
mit
[ "Introduction to Python\nForked from Lecture 1 of Scientific Python Lectures by J.R. Johansson\nPython Program Files\n\nPython code is usually stored in text files with the file ending in \".py\":\n myprogram.py\nEvery line in a Python program file is assumed to be a Python statement, or part thereof. \nThe only exception is comment lines, which start with the character # (optionally preceded by an arbitrary number of white-space characters, i.e., tabs or spaces). Comment lines are usually ignored by the Python interpreter.\n```\n\nthis is a comment\n```\n\nTo run our Python program from the command line we use:\n $ python myprogram.py\nOn UNIX systems it is common to define the path to the interpreter on the first line of the program (note that this is a comment line as far as the Python interpreter is concerned):\n #!/usr/bin/env python\n\nIf we do, and if we additionally set the file script to be executable, we can run the program like this:\n $ myprogram.py\nExample:", "!ls ../scripts/hello-world*.py\n\n!cat ../scripts/hello-world.py\n\n!python scripts/hello-world.py", "Jupyter Notebooks\nThis file - a Jupyter (IPython) notebook - does not follow the standard pattern with Python code in a text file. Instead, an IPython notebook is stored as a file in the JSON format. The advantage is that we can mix formatted text, Python code and code output. It requires the IPython notebook server to run it though, and therefore isn't a stand-alone Python program as described above. Other than that, there is no difference between the Python code that goes into a program file or an IPython notebook.\nModules\nMost of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.\nReferences\n\nThe Python Language Reference: https://docs.python.org/3/reference/index.html\nThe Python Standard Library: https://docs.python.org/3/library/\n\nTo use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:", "import math", "This includes the whole module and makes it available for use later in the program. For example, we can do:", "import math\nx = math.cos(2 * math.pi)\nprint(x)", "Alternatively, we can chose to import all symbols (functions and variables) in a module to the current namespace (so that we don't need to use the prefix \"math.\" every time we use something from the math module:", "from math import *\nx = cos(2 * pi)\nprint(x)", "This pattern can be very convenient, but in large programs that include many modules it is often a good idea to keep the symbols from each module in their own namespaces, by using the import math pattern. This would elminate potentially confusing problems with name space collisions.\nAs a third alternative, we can chose to import only a few selected symbols from a module by explicitly listing which ones we want to import instead of using the wildcard character *:", "from math import cos, pi\nx = cos(2 * pi)\nprint(x)", "Looking at what a module contains, and its documentation\nOnce a module is imported, we can list the symbols it provides using the dir function:", "import math\nprint(dir(math))", "And using the function help we can get a description of each function (almost .. 
not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).", "help(math.log)\n\nmath.log(10)\n\nmath.log(10, 2)", "We can also use the help function directly on modules: Try\nhelp(math)\n\nSome very useful modules form the Python standard library are os, sys, math, shutil, re, subprocess, multiprocessing, threading. \nA complete lists of standard modules for Python 3 are available at http://docs.python.org/3/library/.\nFor example, this is the os module in the standard library.", "import os\nos.listdir()", "Variables and types\nSymbol names\nVariable names in Python can contain alphanumerical characters a-z, A-Z, 0-9 and some special characters such as _. Normal variable names must start with a letter. \nBy convention, variable names start with a lower-case letter, and Class names start with a capital letter. \nIn addition, there are a number of Python keywords that cannot be used as variable names. These keywords are:\nand, as, assert, break, class, continue, def, del, elif, else, except, \nexec, finally, for, from, global, if, import, in, is, lambda, not, or,\npass, print, raise, return, try, while, with, yield\n\nAssignment\nThe assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.\nAssigning a value to a new variable creates the variable:", "# variable assignments\nx = 1.0\nmy_variable = 12.2", "Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.", "type(x)", "If we assign a new value to a variable, its type can change.", "x = 1\n\ntype(x)", "If we try to use a variable that has not yet been defined we get an NameError:", "import traceback \n\ntry:\n print(y)\nexcept NameError as e:\n print(traceback.format_exc())", "Fundamental types", "# integers\nx = 1\ntype(x)\n\n# float\nx = 1.0\ntype(x)\n\n# boolean\nb1 = True\nb2 = False\n\ntype(b1)\n\n# complex numbers: note the use of `j` to specify the imaginary part\nx = 1.0 - 1.0j\ntype(x)\n\nprint(x)\n\nprint(x.real, x.imag)", "Type utility functions", "x = 1.0\n\n# check if the variable x is a float\ntype(x) is float\n\n# check if the variable x is an int\ntype(x) is int", "We can also use the isinstance method for testing types of variables:", "isinstance(x, float)", "Type casting", "x = 1.5\n\nprint(x, type(x))\n\nx = int(x)\n\nprint(x, type(x))\n\nz = complex(x)\n\nprint(z, type(z))\n\nimport traceback \n\ntry:\n x = float(z)\nexcept TypeError as e:\n print(traceback.format_exc())", "Operators and comparisons\nMost operators and comparisons in Python work as one would expect:\n\nArithmetic operators +, -, *, /, // (integer division), '**' power", "1 + 2, 1 - 2, 1 * 2, 1 / 2\n\n1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0\n\n# Integer division of float numbers\n3.0 // 2.0\n\n# Note! 
The power operators in python isn't ^, but **\n2 ** 2", "Note: The / operator always performs a floating point division in Python 3.x.\nThis is not true in Python 2.x, where the result of / is always an integer if the operands are integers.\nto be more specific, 1/2 = 0.5 (float) in Python 3.x, and 1/2 = 0 (int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x).\nThe boolean operators are spelled out as the words and, not, or.", "True and False\n\nnot False\n\nTrue or False", "Comparison operators &gt;, &lt;, &gt;= (greater or equal), &lt;= (less or equal), == equality, is identical.", "2 > 1, 2 < 1\n\n2 > 2, 2 < 2\n\n2 >= 2, 2 <= 2\n\n# equality\n[1,2] == [1,2]\n\n# objects identical?\nlist1 = list2 = [1,2]\n\nlist1 is list2", "Exercise:\nMindy has $5.25 in her pocket. Apples cost 29 cents each. Calculate how many apples mindy can buy and how much change she will have left. Money should be represented in variables of type float and apples should be represented in variables of type integer.\nAnswer:\n\nmindy_money = 5.25\napple_cost = .29\nnum_apples = ?\nchange = ?\n\nCompound types: Strings, List and dictionaries\nStrings\nStrings are the variable type that is used for storing text messages.", "s = \"Hello world\"\ntype(s)\n\n# length of the string: the number of characters\nlen(s)\n\n# replace a substring in a string with somethign else\ns2 = s.replace(\"world\", \"test\")\nprint(s2)", "We can index a character in a string using []:", "s[0]", "Heads up MATLAB and R users: Indexing start at 0!\nWe can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop -1 (the character at index stop is not included):", "s[0:5]\n\ns[4:5]", "If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:", "s[:5]\n\ns[6:]\n\ns[:]", "We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):", "s[::1]\n\ns[::2]", "This technique is called slicing. Read more about the syntax here: https://docs.python.org/3.5/library/functions.html#slice\nPython has a very rich set of functions for text processing. See for example https://docs.python.org/3.5/library/string.html for more information.\nString formatting examples", "print(\"str1\", \"str2\", \"str3\") # The print statement concatenates strings with a space\n\nprint(\"str1\", 1.0, False, -1j) # The print statements converts all arguments to strings\n\nprint(\"str1\" + \"str2\" + \"str3\") # strings added with + are concatenated without space\n\nprint(\"value = %f\" % 1.0) # we can use C-style string formatting\n\n# this formatting creates a string\ns2 = \"value1 = %.2f. value2 = %d\" % (3.1415, 1.5)\n\nprint(s2)\n\n# alternative, more intuitive way of formatting a string \ns3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)\n\nprint(s3)", "Exercise:\nPaste in the code from your previous exercise and output the result as a story (round monetary values to 2 decimal places). The ouptut should look like this:\n\"Mindy had \\$5.25 in her pocket. Apples at her nearby store cost 29 cents. 
With her \\$5.25, mindy can buy 18 apples and will have 10 cents left over.\"\nList\nLists are very similar to strings, except that each element can be of any type.\nThe syntax for creating lists in Python is [...]:", "l = [1,2,3,4]\n\nprint(type(l))\nprint(l)", "We can use the same slicing techniques to manipulate lists as we could use on strings:", "print(l)\n\nprint(l[1:3])\n\nprint(l[::2])", "Heads up MATLAB and R users: Indexing starts at 0!", "l[0]", "Elements in a list do not all have to be of the same type:", "l = [1, 'a', 1.0, 1-1j]\n\nprint(l)", "Python lists can be heterogeneous and arbitrarily nested:", "nested_list = [1, [2, [3, [4, [5]]]]]\n\nnested_list", "Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the range function:", "start = 10\nstop = 30\nstep = 2\n\nrange(start, stop, step)\n\n# in python 3 range generates an iterator, which can be converted to a list using 'list(...)'.\n# It has no effect in python 2\nlist(range(start, stop, step))\n\nlist(range(-10, 10))\n\ns\n\n# convert a string to a list by type casting:\ns2 = list(s)\n\ns2\n\n# sorting lists (by creating a new variable)\n\ns3 = sorted(s2)\n\nprint(s2)\nprint(s3)\n\n# sorting lists in place\ns2.sort()\n\nprint(s2)", "Adding, inserting, modifying, and removing elements from lists", "# create a new empty list\nl = []\n\n# add an elements using `append`\nl.append(\"A\")\nl.append(\"d\")\nl.append(\"d\")\n\nprint(l)", "We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.", "l[1] = \"p\"\nl[2] = \"p\"\n\nprint(l)\n\nl[1:3] = [\"d\", \"d\"]\n\nprint(l)", "Insert an element at an specific index using insert", "l.insert(0, \"i\")\nl.insert(1, \"n\")\nl.insert(2, \"s\")\nl.insert(3, \"e\")\nl.insert(4, \"r\")\nl.insert(5, \"t\")\n\nprint(l)", "Remove first element with specific value using 'remove'", "l.remove(\"A\")\n\nprint(l)", "Remove an element at a specific location using del:", "del l[7]\ndel l[6]\n\nprint(l)", "See help(list) for more details, or read the online documentation \nTuples\nTuples are like lists, except that they cannot be modified once created, that is they are immutable. \nIn Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:", "point = (10, 20)\n\nprint(point, type(point))\n\npoint = 10, 20\n\nprint(point, type(point))", "We can unpack a tuple by assigning it to a comma-separated list of variables:", "x, y = point\n\nprint(\"x =\", x)\nprint(\"y =\", y)", "If we try to assign a new value to an element in a tuple we get an error:", "try:\n point[0] = 20\nexcept TypeError as e:\n print(traceback.format_exc())", "Dictionaries\nDictionaries are also like lists, except that each element is a key-value pair. 
The syntax for dictionaries is {key1 : value1, ...}:", "params = {\"parameter1\" : 1.0,\n \"parameter2\" : 2.0,\n \"parameter3\" : 3.0,}\n\nprint(type(params))\nprint(params)\n\nprint(\"parameter1 = \" + str(params[\"parameter1\"]))\nprint(\"parameter2 = \" + str(params[\"parameter2\"]))\nprint(\"parameter3 = \" + str(params[\"parameter3\"]))\n\nparams[\"parameter1\"] = \"A\"\nparams[\"parameter2\"] = \"B\"\n\n# add a new entry\nparams[\"parameter4\"] = \"D\"\n\nprint(\"parameter1 = \" + str(params[\"parameter1\"]))\nprint(\"parameter2 = \" + str(params[\"parameter2\"]))\nprint(\"parameter3 = \" + str(params[\"parameter3\"]))\nprint(\"parameter4 = \" + str(params[\"parameter4\"]))", "Exercise:\nMindy doesn't want 18 apples, that's too many for someone who lives by themselves. We're now going to represent mindy's world using our new data types.\nMake a list containing the fruits that mindy desires. She likes apples, strawberries, pinapples, and papayas.\nMake a tuple containing the fruits that the store has. This is immutable because the store doesn't change their inventory. The local store has apples, strawberries, pinapples, pears, bananas, and oranges.\nMake a dictonary showing the price of each fruit at the store. Apples are 29 cents, bananas are 5 cents, oranges are 20 cents, strawberries are 30 cents and pinapples are $1.50.\nControl Flow\nConditional statements: if, elif, else\nThe Python syntax for conditional execution of code uses the keywords if, elif (else if), else:", "statement1 = False\nstatement2 = False\n\nif statement1:\n print(\"statement1 is True\")\n \nelif statement2:\n print(\"statement2 is True\")\n \nelse:\n print(\"statement1 and statement2 are False\")", "For the first time, here we encounted a peculiar and unusual aspect of the Python programming language: Program blocks are defined by their indentation level. \nCompare to the equivalent C code:\nif (statement1)\n{\n printf(\"statement1 is True\\n\");\n}\nelse if (statement2)\n{\n printf(\"statement2 is True\\n\");\n}\nelse\n{\n printf(\"statement1 and statement2 are False\\n\");\n}\n\nIn C blocks are defined by the enclosing curly brakets { and }. And the level of indentation (white space before the code statements) does not matter (completely optional). \nBut in Python, the extent of a code block is defined by the indentation level (usually a tab or say four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors. \nExamples:", "statement1 = statement2 = True\n\nif statement1:\n if statement2:\n print(\"both statement1 and statement2 are True\")\n\n# # Bad indentation!\n# if statement1:\n# if statement2:\n# print(\"both statement1 and statement2 are True\") # this line is not properly indented\n\nstatement1 = False \n\nif statement1:\n print(\"printed if statement1 is True\")\n \n print(\"still inside the if block\")\n\nif statement1:\n print(\"printed if statement1 is True\")\n \nprint(\"now outside the if block\")", "Loops\nIn Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is:\nfor loops:", "for x in [1,2,3]:\n print(x)", "The for loop iterates over the elements of the supplied list, and executes the containing block once for each element. Any kind of list can be used in the for loop. 
For example:", "for x in range(4): # by default range start at 0\n print(x)", "Note: range(4) does not include 4 !", "for x in range(-3,3):\n print(x)\n\nfor word in [\"scientific\", \"computing\", \"with\", \"python\"]:\n print(word)", "To iterate over key-value pairs of a dictionary:", "for key, value in params.items():\n print(key + \" = \" + str(value))", "Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:", "for idx, x in enumerate(range(-3,3)):\n print(idx, x)", "List comprehensions: Creating lists using for loops:\nA convenient and compact way to initialize lists:", "l1 = [x**2 for x in range(0,5)]\n\nprint(l1)", "while loops:", "i = 0\n\nwhile i < 5:\n print(i)\n \n i = i + 1\n \nprint(\"done\")", "Note that the print(\"done\") statement is not part of the while loop body because of the difference in indentation.\nExercise:\nLoop through all of the fruits that mindy wants and check if the store has them. For each fruit that she wants print \n\"Mindy, the store has apples and they cost $.29\"\nor\n\"Mindy, the store does not have papayas\"\nFunctions\nA function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.", "def func0(): \n print(\"test\")\n\nfunc0()", "Optionally, but highly recommended, we can define a so called \"docstring\", which is a description of the functions purpose and behaivor. The docstring should follow directly after the function definition, before the code in the function body.", "def func1(s):\n \"\"\"\n Print a string 's' and tell how many characters it has \n \"\"\"\n \n print(s + \" has \" + str(len(s)) + \" characters\")\n\nhelp(func1)\n\nfunc1(\"test\")", "Functions that returns a value use the return keyword:", "def square(x):\n \"\"\"\n Return the square of x.\n \"\"\"\n return x ** 2\n\nsquare(4)", "We can return multiple values from a function using tuples (see above):", "def powers(x):\n \"\"\"\n Return a few powers of x.\n \"\"\"\n return x ** 2, x ** 3, x ** 4\n\npowers(3)\n\nx2, x3, x4 = powers(3)\n\nprint(x3)", "Default argument and keyword arguments\nIn a definition of a function, we can give default values to the arguments the function takes:", "def myfunc(x, p=2, debug=False):\n if debug:\n print(\"evaluating myfunc for x = \" + str(x) + \" using exponent p = \" + str(p))\n return x**p", "If we don't provide a value of the debug argument when calling the the function myfunc it defaults to the value provided in the function definition:", "myfunc(5)\n\nmyfunc(5, debug=True)", "If we explicitly list the name of the arguments in the function calls, they do not need to come in the same order as in the function definition. 
This is called keyword arguments, and is often very useful in functions that takes a lot of optional arguments.", "myfunc(p=3, debug=True, x=7)", "Unnamed functions (lambda function)\nIn Python we can also create unnamed functions, using the lambda keyword:", "f1 = lambda x: x**2\n \n# is equivalent to \n\ndef f2(x):\n return x**2\n\nf1(2), f2(2)", "This technique is useful for example when we want to pass a simple function as an argument to another function, like this:", "# map is a built-in python function\nmap(lambda x: x**2, range(-3,4))\n\n# in python 3 we can use `list(...)` to convert the iterator to an explicit list\nlist(map(lambda x: x**2, range(-3,4)))", "Exercise:\nMindy is great, but we want code that can tell anyone what fruits the store has. To do this we will generalize our code for Mindy using a function.\nWrite a function that takes the following parameters\n- full_name (string)\n- fruits_you_want (list)\n- fruits_the_store_has (tuple)\n- prices (dict)\nand prints to the terminal a sentence per fruit that you want just like the last exercise. For example, if \nname = 'Al'\nlist_of_fruits_you_want = ['apple', 'banana']\ntuple_of_fruits_the_store_has = ('apple', 'banana', 'orange', 'strawberries', 'pineapple')\nprices = {\n 'apple' : .29\n 'banana': .05\n 'orange': .20\n 'strawberries': .30\n 'pinapple': 1.50\n}\nThe function should print.\n\"Al, the store has apples and they cost \\$.29\"\n\"Al, the store has bananas and they cost \\$.05\"\nClasses\nClasses are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object. \nIn Python a class can contain attributes (variables) and methods (functions).\nA class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).\n\n\nEach class method should have an argument self as its first argument. This object is a self-reference.\n\n\nSome class method names have special meaning, for example:\n\n__init__: The name of the method that is invoked when the object is first created.\n__str__ : A method that is invoked when a simple string representation of the class is needed, as for example when printed.\nThere are many more, see http://docs.python.org/2/reference/datamodel.html#special-method-names", "class Point:\n \"\"\"\n Simple class for representing a point in a Cartesian coordinate system.\n \"\"\"\n \n def __init__(self, x, y):\n \"\"\"\n Create a new Point at x, y.\n \"\"\"\n self.x = x\n self.y = y\n \n def translate(self, dx, dy):\n \"\"\"\n Translate the point by dx and dy in the x and y direction.\n \"\"\"\n self.x += dx\n self.y += dy\n \n def __str__(self):\n return(\"Point at [%f, %f]\" % (self.x, self.y))", "To create a new instance of a class:", "p1 = Point(0, 0) # this will invoke the __init__ method in the Point class\n\nprint(p1) # this will invoke the __str__ method", "To invoke a class method in the class instance p:", "p2 = Point(1, 1)\n\np1.translate(0.25, 1.5)\n\nprint(p1)\nprint(p2)", "Note that calling class methods can modifiy the state of that particular class instance, but does not effect other class instances or any global variables.\nThat is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities. 
\nModules\nOne of the most important concepts in good programming is to reuse code and avoid repetitions.\nThe idea is to write functions and classes with a well-defined purpose and scope, and reuse these instead of repeating similar code in different part of a program (modular programming). The result is usually that readability and maintainability of a program is greatly improved. What this means in practice is that our programs have fewer bugs, are easier to extend and debug/troubleshoot. \nPython supports modular programming at different levels. Functions and classes are examples of tools for low-level modular programming. Python modules are a higher-level modular programming construct, where we can collect related variables, functions and classes in a module. A python module is defined in a python file (with file-ending .py), and it can be made accessible to other Python modules and programs using the import statement. \nConsider the following example: the file mymodule.py contains simple example implementations of a variable, function and a class:", "%%file mymodule.py\n\"\"\"\nExample of a python module. Contains a variable called my_variable,\na function called my_function, and a class called MyClass.\n\"\"\"\n\nmy_variable = 0\n\ndef my_function():\n \"\"\"\n Example function\n \"\"\"\n return my_variable\n \nclass MyClass:\n \"\"\"\n Example class.\n \"\"\"\n\n def __init__(self):\n self.variable = my_variable\n \n def set_variable(self, new_value):\n \"\"\"\n Set self.variable to a new value\n \"\"\"\n self.variable = new_value\n \n def get_variable(self):\n return self.variable", "We can import the module mymodule into our Python program using import:", "import mymodule", "Use help(module) to get a summary of what the module provides:", "help(mymodule)\n\nmymodule.my_variable\n\nmymodule.my_function() \n\nmy_class = mymodule.MyClass() \nmy_class.set_variable(10)\nmy_class.get_variable()", "If we make changes to the code in mymodule.py, we need to reload it using reload:", "import importlib\nimportlib.reload(mymodule) # Python 3 only\n# For Python 2 use reload(mymodule)", "Exceptions\nIn Python errors are managed with a special language construct called \"Exceptions\". 
When errors occur exceptions can be raised, which interrupts the normal program flow and fallback to somewhere else in the code where the closest try-except statement is defined.\nTo generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.", "try:\n raise Exception(\"description of the error\")\nexcept Exception as e:\n print(traceback.format_exc())", "A typical use of exceptions is to abort functions when some error condition occurs, for example:\ndef my_function(arguments):\n\n if not verify(arguments):\n raise Exception(\"Invalid arguments\")\n\n # rest of the code goes here\n\nTo gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements:\ntry:\n # normal code goes here\nexcept:\n # code for error handling goes here\n # this code is not executed unless the code\n # above generated an error\n\nFor example:", "try:\n print(\"test\")\n # generate an error: the variable test is not defined\n print(test)\nexcept Exception:\n print(\"Caught an exception\")", "To get information about the error, we can access the Exception class instance that describes the exception by using for example:\nexcept Exception as e:", "try:\n print(\"test\")\n # generate an error: the variable test is not defined\n print(test)\nexcept Exception as e:\n print(\"Caught an exception:\", e)", "Excercise:\nMake two classes with the following variables and methods\nStore\nvariables\n- inventory (dict)\nmethods\n- show_inventory()\n * nicely displays the store's inventory\n- message_customer(customer)\n * shows the customer if the store has the fruits they want and how much each fruit costs (this is code from previous exercises, except it will use the customer's \"Formal Greeting\" instead of their first name.) \nCustomer \nvariables\n- first_name (string)\n- last_name (string)\n- is_male (boolean)\n- money (float)\n- fruit (dict)\n- preferred_fruit (list)\nmethods\n- formal_greeting()\n * (Mr. Al Johri, Ms. Mindy Smith)\n- buy_fruit(store, fruit_name, fruit_amt)\n * inputs are a store, the name of a fruit, and the amount of that fruit\n * checks to see if the store has the fruit - returns an error if it does not\n * checks to see if the customer can afford the amount of fruit they intend to buy - returns an error if not\n * \"purchases\" the fruit by adding it to the customers fruit dict and removes the correct amount of money from their money variable\nExercise:\nInstantiate a list of the following customers.\n\nMindy Smith - \\$5.25, likes apples and oranges\nAl Johri - \\$20.19, likes papaya, strawberries, pinapple, and apples \nHillary Clinton - \\$15, likes strawberries and oranges \nOliver Twist - \\$.05, likes apples \nDonald Trump - \\$4000, only likes durian\n\nCreate a store called Whole Foods with the following inventory\n * 'apple' : \\$.29\n * 'banana': \\$.05\n * 'orange': \\$.20\n * 'strawberries': \\$.30\n * 'pinapple': \\$1.50,\n * 'grapes': \\$.22,\n * 'durian': \\$5000\nWrite code to do the following\n\nPrint the store's inventory\nFor each customer, print the store's message to them\nHave each customer purchase 1 of each fruit in their list of preferred fruits. \n\n(Make sure you have error handling so the program doesn't halt if the store doesn't have the fruit the customer wants or if the customer doesn not have enough money to buy the fruit.)\nBonus Exercise\nOrganize the code! 
Make a module called fruits (in a file fruits.py) that contains the class definitions.\nMake a separate cell (in a file main.py) which imports the fruits module, instantiates the store and the list of customers, and runs the code in the previous exercise.\nFurther reading\n\nhttp://www.python.org - The official web page of the Python programming language.\nhttps://docs.python.org/3/tutorial/ - The Official Python Tutorial\nhttp://www.python.org/dev/peps/pep-0008 - Style guide for Python programming. Highly recommended. \nhttp://www.scipy-lectures.org/intro/language/python_language.html - Scipy Lectures: Lecture 1.2\n\nVersions", "%reload_ext version_information\n%version_information" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DAInamite/programming-humanoid-robot-in-python
kinematics/inverse_kinematics_2d_jax.ipynb
gpl-2.0
[ "Inverse Kinematics (2D)\nwith https://github.com/google/jax\nnote\n* running on GPU with colab: https://colab.research.google.com/drive/1guZnXsFOEVLb7IOXVzUgRc8pbq3Z50Qf", "%matplotlib notebook\nfrom matplotlib import pylab as plt\nfrom numpy import random, pi\nfrom __future__ import division\nfrom IPython import display\nfrom ipywidgets import interact, fixed\n\nimport jax.numpy as np\nfrom jax import grad, jit", "Coordinate Transformation", "def trans(x, y, a):\n '''create a 2D transformation'''\n s = np.sin(a)\n c = np.cos(a)\n return np.asarray([[c, -s, x],\n [s, c, y],\n [0, 0, 1]])\n\ndef from_trans(m):\n '''get x, y, theta from transform matrix'''\n a = np.arctan2(m[1, 0], m[0, 0])\n return np.asarray([m[0, -1], m[1, -1], a])\n\nprint(trans(0., 0., 0.))", "Parameters of robot arm", "l = [0, 3, 2, 1]\n#l = [0, 3, 2, 1, 1]\n#l = [0, 3, 2, 1, 1, 1]\n#l = [1] * 30\nN = len(l) - 1 # number of links\nmax_len = sum(l)\na = random.random_sample(N) # angles of joints\nT0 = trans(0, 0, 0) # base", "Forward Kinematics", "def forward_kinematics(T0, l, a):\n T = [T0]\n for i in range(len(a)):\n Ti = np.dot(T[-1], trans(l[i], 0, a[i]))\n T.append(Ti)\n Te = np.dot(T[-1], trans(l[-1], 0, 0)) # end effector\n T.append(Te)\n return T\n\ndef show_robot_arm(T):\n plt.cla()\n x = [Ti[0,-1] for Ti in T]\n y = [Ti[1,-1] for Ti in T]\n plt.plot(x, y, '-or', linewidth=5, markersize=10)\n plt.plot(x[-1], y[-1], 'og', linewidth=5, markersize=10)\n plt.xlim([-max_len, max_len])\n plt.ylim([-max_len, max_len]) \n ax = plt.axes()\n ax.set_aspect('equal')\n t = np.arctan2(T[-1][1, 0], T[-1][0,0])\n ax.annotate('[%.2f,%.2f,%.2f]' % (x[-1], y[-1], t), xy=(x[-1], y[-1]), xytext=(x[-1], y[-1] + 0.5))\n plt.show\n return ax", "Inverse Kinematics\nNumerical Solution: jax", "def error_func(theta, target):\n Ts = forward_kinematics(T0, l, theta)\n Te = Ts[-1]\n e = target - Te\n return np.sum(e * e)\n\ntheta = random.random(N)\ndef inverse_kinematics(x_e, y_e, theta_e, theta):\n target = trans(x_e, y_e, theta_e)\n func = lambda t: error_func(t, target)\n func_grad = jit(grad(func))\n \n for i in range(1000):\n e = func(theta)\n d = func_grad(theta)\n theta -= d * 1e-2\n if e < 1e-4:\n break\n \n return theta\n\nT = forward_kinematics(T0, l, theta)\nshow_robot_arm(T)\nTe = np.asarray([from_trans(T[-1])])\n\n@interact(x_e=(0, max_len, 0.01), y_e=(-max_len, max_len, 0.01), theta_e=(-pi, pi, 0.01), theta=fixed(theta))\ndef set_end_effector(x_e=Te[0,0], y_e=Te[0,1], theta_e=Te[0,2], theta=theta):\n theta = inverse_kinematics(x_e, y_e, theta_e, theta)\n T = forward_kinematics(T0, l, theta)\n show_robot_arm(T)\n return theta\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cwehmeyer/pydpc
ipython/Example01.ipynb
lgpl-3.0
[ "Example and timings\nThis notebook gives a short introduction in how to use pydpc for a simple clustering problem.", "%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom pydpc import Cluster\nfrom pydpc._reference import Cluster as RefCluster", "We start with preparing the data points for clustering. The data is two-dimensional and craeted by drawing random numbers from four superpositioned gaussian distributions which are centered at the corners of a square (indicated by the red dashed lines).", "# generate the data points\nnpoints = 2000\nmux = 1.6\nmuy = 1.6\npoints = np.zeros(shape=(npoints, 2), dtype=np.float64)\npoints[:, 0] = np.random.randn(npoints) + mux * (-1)**np.random.randint(0, high=2, size=npoints)\npoints[:, 1] = np.random.randn(npoints) + muy * (-1)**np.random.randint(0, high=2, size=npoints)\n# draw the data points\nfig, ax = plt.subplots(figsize=(5, 5))\nax.scatter(points[:, 0], points[:, 1], s=40)\nax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\nax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\nax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color=\"red\")\nax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color=\"red\")\nax.set_xlabel(r\"x / a.u.\", fontsize=20)\nax.set_ylabel(r\"y / a.u.\", fontsize=20)\nax.tick_params(labelsize=15)\nax.set_xlim([-7, 7])\nax.set_ylim([-7, 7])\nax.set_aspect('equal')\nfig.tight_layout()", "Now comes the interesting part.\nWe pass the numpy ndarray with the data points to the Cluster class which prepares the data set for clustering. In this stage, it computes the Euclidean distances between all data points and from that the two properties to identify clusters within the data: each data points' density and minimal distance delta to a point of higher density.\nOnce these properties are computed, a decision graph is drawn, where each outlier in the upper right corner represents a different cluster. In our example, we should find four outliers. So far, however, no clustering has yet been done.", "clu = Cluster(points)", "Now that we have the decision graph, we can select the outliers via the assign method by setting lower bounds for delta and density. The assign method does the actual clustering; it also shows the decision graph again with the given selection.", "clu.assign(20, 1.5)", "Let us have a look at the result.\nWe again plot the data and red dashed lines indicating the centeres of the gaussian distributions. Indicated in the left panel by red dots are the four outliers from the decision graph; these are our four cluster centers. 
The center panel shows the points' densities and the right panel shows the membership to the four clusters by different coloring.", "fig, ax = plt.subplots(1, 3, figsize=(15, 5))\nax[0].scatter(points[:, 0], points[:, 1], s=40)\nax[0].scatter(points[clu.clusters, 0], points[clu.clusters, 1], s=50, c=\"red\")\nax[1].scatter(points[:, 0], points[:, 1], s=40, c=clu.density)\nax[2].scatter(points[:, 0], points[:, 1], s=40, c=clu.membership, cmap=mpl.cm.cool)\nfor _ax in ax:\n _ax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\n _ax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\n _ax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color=\"red\")\n _ax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color=\"red\")\n _ax.set_xlabel(r\"x / a.u.\", fontsize=20)\n _ax.set_ylabel(r\"y / a.u.\", fontsize=20)\n _ax.tick_params(labelsize=15)\n _ax.set_xlim([-7, 7])\n _ax.set_ylim([-7, 7])\n _ax.set_aspect('equal')\nfig.tight_layout()", "The density peak clusterng can further resolve if the membership of a data point to a certain cluster is strong or rather weak and separates the data points further into core and halo regions.\nThe left panel depicts the border members in grey.\nThe separation in the center panel uses the core/halo criterion of the original authors, the right panel shows a less strict criterion which assumes a halo only between different clusters; here, the halo members are depicted in grey.", "fig, ax = plt.subplots(1, 3, figsize=(15, 5))\nax[0].scatter(\n points[:, 0], points[:, 1],\n s=40, c=clu.membership, cmap=mpl.cm.cool)\nax[0].scatter(points[clu.border_member, 0], points[clu.border_member, 1], s=40, c=\"grey\")\nax[1].scatter(\n points[clu.core_idx, 0], points[clu.core_idx, 1],\n s=40, c=clu.membership[clu.core_idx], cmap=mpl.cm.cool)\nax[1].scatter(points[clu.halo_idx, 0], points[clu.halo_idx, 1], s=40, c=\"grey\")\nclu.autoplot=False\nclu.assign(20, 1.5, border_only=True)\nax[2].scatter(\n points[clu.core_idx, 0], points[clu.core_idx, 1],\n s=40, c=clu.membership[clu.core_idx], cmap=mpl.cm.cool)\nax[2].scatter(points[clu.halo_idx, 0], points[clu.halo_idx, 1], s=40, c=\"grey\")\nax[2].tick_params(labelsize=15)\nfor _ax in ax:\n _ax.plot([-mux, -mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\n _ax.plot([mux, mux], [-1.5 * muy, 1.5 * muy], '--', linewidth=2, color=\"red\")\n _ax.plot([-1.5 * mux, 1.5 * mux], [-muy, -muy], '--', linewidth=2, color=\"red\")\n _ax.plot([-1.5 * mux, 1.5 * mux], [muy, muy], '--', linewidth=2, color=\"red\")\n _ax.set_xlabel(r\"x / a.u.\", fontsize=20)\n _ax.set_ylabel(r\"y / a.u.\", fontsize=20)\n _ax.tick_params(labelsize=15)\n _ax.set_xlim([-7, 7])\n _ax.set_ylim([-7, 7])\n _ax.set_aspect('equal')\nfig.tight_layout()", "This concludes the example.\nIn the remaining part, we address the performance of the pydpc implementation (numpy + cython-wrapped C code) with respect to an older development version (numpy). 
In particular, we look at the numerically most demanding part of computing the Euclidean distances between the data points and estimating density and delta.", "npoints = 1000\npoints = np.zeros(shape=(npoints, 2), dtype=np.float64)\npoints[:, 0] = np.random.randn(npoints) + 1.8 * (-1)**np.random.randint(0, high=2, size=npoints)\npoints[:, 1] = np.random.randn(npoints) + 1.8 * (-1)**np.random.randint(0, high=2, size=npoints)\n\n%timeit Cluster(points, fraction=0.02, autoplot=False)\n%timeit RefCluster(fraction=0.02, autoplot=False).load(points)", "The next two cells measure the full clustering.", "%%timeit\nCluster(points, fraction=0.02, autoplot=False).assign(20, 1.5)\n\n%%timeit\ntmp = RefCluster(fraction=0.02, autoplot=False)\ntmp.load(points)\ntmp.assign(20, 1.5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
littlewizardLI/Udacity-ML-nanodegrees
Project-practice--naive_bayes_tutorial/Naive_Bayes_tutorial.ipynb
apache-2.0
[ "Our Mission\nSpam detection is one of the major applications of Machine Learning in the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'. \nIn this mission we will be using the Naive Bayes algorithm to create a model that can classify 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Usually they have words like 'free', 'win', 'winner', 'cash', 'prize' and the like in them as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and also tend to use a lot of exclamation marks. To the recipient, it is usually pretty straightforward to identify a spam text and our objective here is to train a model to do that for us!\nBeing able to identify spam messages is a binary classification problem as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model, that it can learn from, to make future predictions. \nStep 0: Introduction to the Naive Bayes Theorem\nBayes theorem is one of the earliest probabilistic inference algorithms developed by Reverend Bayes (which he used to try and infer the existence of God no less) and still performs extremely well for certain use cases. \nIt's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to put a certain threat-factor for each person. So based on the features of an individual, like the age, sex, and other smaller factors like is the person carrying a bag?, does the person look nervous? etc. you can make a judgement call as to if that person is viable threat. \nIf an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. The Bayes theorem works in the same way as we are computing the probability of an event(a person being a threat) based on the probabilities of certain related events(age, sex, presence of bag or not, nervousness etc. of the person). \nOne thing to consider is the independence of these features amongst each other. For example if a child looks nervous at the event then the likelihood of that person being a threat is not as much as say if it was a grown man who was nervous. To break this down a bit further, here there are two features we are considering, age AND nervousness. Say we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives as there is a strong chance that minors present at the event will be nervous. Hence by considering the age of a person along with the 'nervousness' feature we would definitely get a more accurate result as to who are potential threats and who aren't. 
\nThis is the 'Naive' bit of the theorem where it considers each feature to be independant of each other which may not always be the case and hence that can affect the final judgement.\nIn short, the Bayes theorem calculates the probability of a certain event happening(in our case, a message being spam) based on the joint probabilistic distributions of certain other events(in our case, a message being classified as spam). We will dive into the workings of the Bayes theorem later in the mission, but first, let us understand the data we are going to work with.\nStep 1.1: Understanding our dataset ###\nWe will be using a 'https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection' dataset from the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes. \n Here's a preview of the data: \n<img src=\"images/dqnb.png\" height=\"1242\" width=\"1242\">\nThe columns in the data set are currently not named and as you can see, there are 2 columns. \nThe first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam. \nThe second column is the text content of the SMS message that is being classified.\n\n Instructions: \n* Import the dataset into a pandas dataframe using the read_table method. Because this is a tab separated dataset we will be using '\\t' as the value for the 'sep' argument which specifies this format. \n* Also, rename the column names by specifying a list ['label, 'sms_message'] to the 'names' argument of read_table().\n* Print the first five values of the dataframe with the new column names.", "'''\nSolution\n'''\nimport pandas as pd\n# Dataset from - https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection\ndf = pd.read_table('smsspamcollection/SMSSpamCollection',\n sep='\\t', \n header=None, \n names=['label', 'sms_message'])\n\n# Output printing out first 5 columns\ndf.head()", "Step 1.2: Data Preprocessing\nNow that we have a basic understanding of what our dataset looks like, lets convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation. \nYou might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values). \nOur model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers. \n\nInstructions: \n* Convert the values in the 'label' colum to numerical values using map method as follows:\n{'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1.\n* Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using \n'shape'.", "'''\nSolution\n'''\ndf['label'] = df.label.map({'ham':0, 'spam':1})\nprint(df.shape)\ndf.head() # returns (rows, columns)", "Step 2.1: Bag of words\nWhat we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy. 
\nHere we'd like to introduce the Bag of Words(BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter. \nUsing a process which we will go through now, we can covert a collection of documents to a matrix, with each document being a row and each word(token) being the column, and the corresponding (row,column) values being the frequency of occurrance of each word or token in that document.\nFor example: \nLets say we have 4 documents as follows:\n['Hello, how are you!',\n'Win money, win from home.',\n'Call me now',\n'Hello, Call you tomorrow?']\nOur objective here is to convert this set of text to a frequency distribution matrix, as follows:\n<img src=\"images/countvectorizer.png\" height=\"542\" width=\"542\">\nHere as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.\nLets break this down and see how we can do this conversion using a small set of documents.\nTo handle this, we will be using sklearns \n<a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html#sklearn.feature_extraction.text.CountVectorizer'> sklearn.feature_extraction.text.CountVectorizer </a> method which does the following:\n\nIt tokenizes the string(separates the string into individual words) and gives an integer ID to each token.\nIt counts the occurrance of each of those tokens.\n\n Please Note: \n\n\nThe CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the lowercase parameter which is by default set to True.\n\n\nIt also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the token_pattern parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters.\n\n\nThe third parameter to take note of is the stop_words parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the' etc. By setting this parameter value to english, CountVectorizer will automatically ignore all words(from our input text) that are found in the built in list of english stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.\n\n\nWe will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data.\nStep 2.2: Implementing Bag of Words from scratch\nBefore we dive into scikit-learn's Bag of Words(BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes. \n Step 1: Convert all strings to their lower case form. 
\nLet's say we have a document set:\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']\n\n\n Instructions: \n* Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.", "'''\nSolution:\n'''\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']\n\nlower_case_documents = []\nfor i in documents:\n lower_case_documents.append(i.lower())\nprint(lower_case_documents)", "Step 2: Removing all punctuations \n\n\nInstructions: \nRemove all punctuation from the strings in the document set. Save them into a list called \n'sans_punctuation_documents'.", "'''\nSolution:\n'''\nsans_punctuation_documents = []\nimport string\n\nfor i in lower_case_documents:\n sans_punctuation_documents.append(i.translate(str.maketrans('', '', string.punctuation)))\nprint(sans_punctuation_documents)", "Step 3: Tokenization \nTokenizing a sentence in a document set means splitting up a sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and the end of a word(for example we could use a single space as the delimiter for identifying words in our document set.)\n\n\nInstructions:\nTokenize the strings stored in 'sans_punctuation_documents' using the split() method. and store the final document set \nin a list called 'preprocessed_documents'.", "'''\nSolution:\n'''\npreprocessed_documents = []\nfor i in sans_punctuation_documents:\n preprocessed_documents.append(i.split(' '))\nprint(preprocessed_documents)", "Step 4: Count frequencies \nNow that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the Counter method from the Python collections library for this purpose. \nCounter counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list. \n\n\nInstructions:\nUsing the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequncy of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.", "'''\nSolution\n'''\nfrequency_list = []\nimport pprint\nfrom collections import Counter\n\nfor i in preprocessed_documents:\n frequency_counts = Counter(i)\n frequency_list.append(frequency_counts)\npprint.pprint(frequency_list)", "Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.\nWe should now have a solid understanding of what is happening behind the scenes in the sklearn.feature_extraction.text.CountVectorizer method of scikit-learn. \nWe will now implement sklearn.feature_extraction.text.CountVectorizer method in the next step.\nStep 2.3: Implementing Bag of Words in scikit-learn\nNow that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. 
We will use the same document set as we used in the previous step.", "'''\nHere we will look to create a frequency matrix on a smaller document set to make sure we understand how the \ndocument-term matrix generation happens. We have created a sample document set 'documents'.\n'''\ndocuments = ['Hello, how are you!',\n 'Win money, win from home.',\n 'Call me now.',\n 'Hello, Call hello you tomorrow?']", "Instructions:\nImport the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.", "'''\nSolution\n'''\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vector = CountVectorizer()", "Data preprocessing with CountVectorizer() \nIn Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:\n\n\nlowercase = True\nThe lowercase parameter has a default value of True which converts all of our text to its lower case form.\n\n\ntoken_pattern = (?u)\\\\b\\\\w\\\\w+\\\\b\nThe token_pattern parameter has a default regular expression value of (?u)\\\\b\\\\w\\\\w+\\\\b which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words.\n\n\nstop_words\nThe stop_words parameter, if set to english will remove all words from our document set that match a list of English stop words which is defined in scikit-learn. Considering the size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not be setting this parameter value.\n\n\nYou can take a look at all the parameter values of your count_vector object by simply printing out the object as follows:", "'''\nPractice node:\nPrint the 'count_vector' object which is an instance of 'CountVectorizer()'\n'''\nprint(count_vector)", "Instructions:\nFit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words \nwhich have been categorized as features using the get_feature_names() method.", "'''\nSolution:\n'''\ncount_vector.fit(documents)\ncount_vector.get_feature_names()", "The get_feature_names() method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'.\n\n\n\nInstructions:\nCreate a matrix with the rows being each of the 4 documents, and the columns being each word. \nThe corresponding (row, column) value is the frequency of occurrance of that word(in the column) in a particular\ndocument(in the row). You can do this using the transform() method and passing in the document data set as the \nargument. The transform() method returns a matrix of numpy integers, you can convert this to an array using\ntoarray(). Call the array 'doc_array'", "'''\nSolution\n'''\ndoc_array = count_vector.transform(documents).toarray()\ndoc_array", "Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately.\n\n\nInstructions:\nConvert the array we obtained, loaded into 'doc_array', into a dataframe and set the column names to \nthe word names(which you computed earlier using get_feature_names(). 
Call the dataframe 'frequency_matrix'.", "'''\nSolution\n'''\nfrequency_matrix = pd.DataFrame(doc_array, \n columns = count_vector.get_feature_names())\nfrequency_matrix", "Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created. \nOne potential issue that can arise from using this method out of the box is the fact that if our dataset of text is extremely large(say if we have a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. So for example words like 'is', 'the', 'an', pronouns, grammatical constructs etc could skew our matrix and affect our analysis. \nThere are a couple of ways to mitigate this. One way is to use the stop_words parameter and set its value to english. This will automatically ignore all words(from our input text) that are found in a built in list of English stop words in scikit-learn.\nAnother way of mitigating this is by using the <a href = 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer'> sklearn.feature_extraction.text.TfidfVectorizer</a> method. This method is out of scope for the context of this lesson.\nStep 3.1: Training and testing sets\nNow that we have understood how to deal with the Bag of Words problem we can get back to our dataset and proceed with our analysis. Our first step in this regard would be to split our dataset into a training and testing set so we can test our model later. \n\n\nInstructions:\nSplit the dataset into a training and testing set by using the train_test_split method in sklearn. Split the data\nusing the following variables:\n* X_train is our training data for the 'sms_message' column.\n* y_train is our training data for the 'label' column\n* X_test is our testing data for the 'sms_message' column.\n* y_test is our testing data for the 'label' column\nPrint out the number of rows we have in each of our training and testing data.", "'''\nSolution\n'''\n# split into training and testing sets\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(df['sms_message'], \n df['label'], \n random_state=1)\n\nprint('Number of rows in the total set: {}'.format(df.shape[0]))\nprint('Number of rows in the training set: {}'.format(X_train.shape[0]))\nprint('Number of rows in the test set: {}'.format(X_test.shape[0]))", "Step 3.2: Applying Bag of Words processing to our dataset.\nNow that we have split the data, our next objective is to follow the steps from Step 2: Bag of words and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:\n\nFirstly, we have to fit our training data (X_train) into CountVectorizer() and return the matrix.\nSecondly, we have to transform our testing data (X_test) to return the matrix. \n\nNote that X_train is our training data for the 'sms_message' column in our dataset and we will be using this to train our model. \nX_test is our testing data for the 'sms_message' column and this is the data we will be using(after transformation to a matrix) to make predictions on. We will then compare those predictions with y_test in a later step. \nFor now, we have provided the code that does the matrix transformations for you!", "'''\n[Practice Node]\n\nThe code for this segment is in 2 parts. 
Firstly, we are learning a vocabulary dictionary for the training data \nand then transforming the data into a document-term matrix; secondly, for the testing data we are only \ntransforming the data into a document-term matrix.\n\nThis is similar to the process we followed in Step 2.3\n\nWe will provide the transformed data to students in the variables 'training_data' and 'testing_data'.\n'''\n\n'''\nSolution\n'''\n# Instantiate the CountVectorizer method\ncount_vector = CountVectorizer()\n\n# Fit the training data and then return the matrix\ntraining_data = count_vector.fit_transform(X_train)\n\n# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()\ntesting_data = count_vector.transform(X_test)", "Step 4.1: Bayes Theorem implementation from scratch\nNow that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of a prior(the probabilities that we are aware of or that is given to us) and the posterior(the probabilities we are looking to compute using the priors). \nLet us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. \nIn the medical field, such probabilies play a very important role as it usually deals with life and death situatuations. \nWe assume the following:\nP(D) is the probability of a person having Diabetes. It's value is 0.01 or in other words, 1% of the general population has diabetes(Disclaimer: these values are assumptions and are not reflective of any medical study).\nP(Pos) is the probability of getting a positive test result.\nP(Neg) is the probability of getting a negative test result.\nP(Pos|D) is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value 0.9. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.\nP(Neg|~D) is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of 0.9 and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate.\nThe Bayes formula is as follows:\n<img src=\"images/bayes_formula.png\" height=\"242\" width=\"242\">\n\n\nP(A) is the prior probability of A occuring independantly. In our example this is P(D). This value is given to us.\n\n\nP(B) is the prior probability of B occuring independantly. In our example this is P(Pos).\n\n\nP(A|B) is the posterior probability that A occurs given B. In our example this is P(D|Pos). That is, the probability of an individual having diabetes, given that, that individual got a positive test result. This is the value that we are looking to calculate.\n\n\nP(B|A) is the likelihood probability of B occuring, given A. In our example this is P(Pos|D). 
This value is given to us.\n\n\nPutting our values into the formula for Bayes theorem we get:\nP(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)\nThe probability of getting a positive test result P(Pos) can be calculated using the Sensitivity and Specificity as follows:\nP(Pos) = [P(D) * Sensitivity] + [P(~D) * (1-Specificity)]", "'''\nInstructions:\nCalculate probability of getting a positive test result, P(Pos)\n'''\n\n'''\nSolution (skeleton code will be provided)\n'''\n# P(D)\np_diabetes = 0.01\n\n# P(~D)\np_no_diabetes = 0.99\n\n# Sensitivity or P(Pos|D)\np_pos_diabetes = 0.9\n\n# Specificity or P(Neg/~D)\np_neg_no_diabetes = 0.9\n\n# P(Pos)\np_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))\nprint('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))", "Using all of this information we can calculate our posteriors as follows: \nThe probability of an individual having diabetes, given that, that individual got a positive test result:\nP(D/Pos) = (P(D) * Sensitivity) / P(Pos)\nThe probability of an individual not having diabetes, given that, that individual got a positive test result:\nP(~D/Pos) = (P(~D) * (1-Specificity)) / P(Pos)\nThe sum of our posteriors will always equal 1.", "'''\nInstructions:\nCompute the probability of an individual having diabetes, given that, that individual got a positive test result.\nIn other words, compute P(D|Pos).\n\nThe formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)\n'''\n\n'''\nSolution\n'''\n# P(D|Pos)\np_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos\nprint('Probability of an individual having diabetes, given that that individual got a positive test result is:\\\n',format(p_diabetes_pos)) \n\n'''\nInstructions:\nCompute the probability of an individual not having diabetes, given that, that individual got a positive test result.\nIn other words, compute P(~D|Pos).\n\nThe formula is: P(~D|Pos) = (P(~D) * P(Pos|~D)) / P(Pos)\n\nNote that P(Pos/~D) can be computed as 1 - P(Neg/~D). \n\nTherefore:\nP(Pos/~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1\n'''\n\n'''\nSolution\n'''\n# P(Pos/~D)\np_pos_no_diabetes = 0.1\n\n# P(~D|Pos)\np_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos\nprint('Probability of an individual not having diabetes, given that that individual got a positive test result is:\\\n',format(p_no_diabetes_pos))", "Congratulations! You have implemented Bayes theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes which of course is only an assumption.\n What does the term 'Naive' in 'Naive Bayes' mean ? \nThe term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of 0 and 1, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. 
Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other. \nStep 4.2: Naive Bayes implementation from scratch\nNow that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than feature. \nLet's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:\n\nProbability that Jill Stein says 'freedom': 0.1 ---------> P(J|F)\nProbability that Jill Stein says 'immigration': 0.1 -----> P(J|I)\n\nProbability that Jill Stein says 'environment': 0.8 -----> P(J|E)\n\n\nProbability that Gary Johnson says 'freedom': 0.7 -------> P(G|F)\n\nProbability that Gary Johnson says 'immigration': 0.2 ---> P(G|I)\nProbability that Gary Johnson says 'environment': 0.1 ---> P(G|E)\n\nAnd let us also assume that the probablility of Jill Stein giving a speech, P(J) is 0.5 and the same for Gary Johnson, P(G) = 0.5. \nGiven this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes'theorem comes into play as we are considering two features, 'freedom' and 'immigration'.\nNow we are at a place where we can define the formula for the Naive Bayes' theorem:\n<img src=\"images/naivebayes.png\" height=\"342\" width=\"342\">\nHere, y is the class variable or in our case the name of the candidate and x1 through xn are the feature vectors or in our case the individual words. The theorem makes the assumption that each of the feature vectors or words (xi) are independent of each other.\nTo break this down, we have to compute the following posterior probabilities:\n\n\nP(J|F,I): Probability of Jill Stein saying the words Freedom and Immigration. \nUsing the formula and our knowledge of Bayes' theorem, we can compute this as follows: P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I). Here P(F,I) is the probability of the words 'freedom' and 'immigration' being said in a speech.\n\n\nP(G|F,I): Probability of Gary Johnson saying the words Freedom and Immigration. \nUsing the formula, we can compute this as follows: P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I)", "'''\nInstructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or\nP(F,I).\n\nThe first step is multiplying the probabilities of Jill Stein giving a speech with her individual \nprobabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text\n\nThe second step is multiplying the probabilities of Gary Johnson giving a speech with his individual \nprobabilities of saying the words 'freedom' and 'immigration'. 
Store this in a variable called p_g_text\n\nThe third step is to add both of these probabilities and you will get P(F,I).\n'''\n\n'''\nSolution: Step 1\n'''\n# P(J)\np_j = 0.5\n\n# P(J|F)\np_j_f = 0.1\n\n# P(J|I)\np_j_i = 0.1\n\np_j_text = p_j * p_j_f * p_j_i\nprint(p_j_text)\n\n'''\nSolution: Step 2\n'''\n# P(G)\np_g = 0.5\n\n# P(G|F)\np_g_f = 0.7\n\n# P(G|I)\np_g_i = 0.2\n\np_g_text = p_g * p_g_f * p_g_i\nprint(p_g_text)\n\n'''\nSolution: Step 3: Compute P(F,I) and store in p_f_i\n'''\np_f_i = p_j_text + p_g_text\nprint('Probability of words freedom and immigration being said are: ', format(p_f_i))", "Now we can compute the probability of P(J|F,I), that is the probability of Jill Stein saying the words Freedom and Immigration and P(G|F,I), that is the probability of Gary Johnson saying the words Freedom and Immigration.", "'''\nInstructions:\nCompute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(J|F) * P(J|I)) / P(F,I) and store it in a variable p_j_fi\n'''\n\n'''\nSolution\n'''\np_j_fi = p_j_text / p_f_i\nprint('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))\n\n'''\nInstructions:\nCompute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(G|F) * P(G|I)) / P(F,I) and store it in a variable p_g_fi\n'''\n\n'''\nSolution\n'''\np_g_fi = p_g_text / p_f_i\nprint('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))", "And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compared to the 93.3% chance for Gary Johnson of the Libertarian party.\nAnother more generic example of Naive Bayes' in action is when we search for the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Sacramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually, in which case we would get results of images tagged with 'Sacramento' like pictures of city landscapes and images of 'Kings' which could be pictures of crowns or kings from history when what we are looking to get are images of the basketball team. This is a classic case of the search engine treating the words as independent entities and hence being 'naive' in its approach. \nApplying this to our problem of classifying messages as spam, the Naive Bayes algorithm looks at each word individually and not as associated entities with any kind of link between them. In the case of spam detectors, this usually works as there are certain red flag words which can almost guarantee its classification as spam, for example emails with words like 'viagra' are usually classified as spam.\nStep 5: Naive Bayes implementation using scikit-learn\nThankfully, sklearn has several Naive Bayes implementations that we can use and so we do not have to do the math from scratch. We will be using sklearn's sklearn.naive_bayes method to make predictions on our dataset. \nSpecifically, we will be using the multinomial Naive Bayes implementation. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. 
On the other hand Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian(normal) distribution.", "'''\nInstructions:\n\nWe have loaded the training data into the variable 'training_data' and the testing data into the \nvariable 'testing_data'.\n\nImport the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier\n'naive_bayes'. You will be training the classifier using 'training_data' and y_train' from our split earlier. \n'''\n\n'''\nSolution\n'''\nfrom sklearn.naive_bayes import MultinomialNB\nnaive_bayes = MultinomialNB()\nnaive_bayes.fit(training_data, y_train)\n\n'''\nInstructions:\nNow that our algorithm has been trained using the training data set we can now make some predictions on the test data\nstored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.\n'''\n\n'''\nSolution\n'''\npredictions = naive_bayes.predict(testing_data)", "Now that predictions have been made on our test set, we need to check the accuracy of our predictions.\nStep 6: Evaluating our model\nNow that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, but first let's do quick recap of them.\n Accuracy measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).\n Precision tells us what proportion of messages we classified as spam, actually were spam.\nIt is a ratio of true positives(words classified as spam, and which are actually spam) to all positives(all words classified as spam, irrespective of whether that was the correct classificatio), in other words it is the ratio of\n[True Positives/(True Positives + False Positives)]\n Recall(sensitivity) tells us what proportion of messages that actually were spam were classified by us as spam.\nIt is a ratio of true positives(words classified as spam, and which are actually spam) to all the words that were actually spam, in other words it is the ratio of\n[True Positives/(True Positives + False Negatives)]\nFor classification problems that are skewed in their classification distributions like in our case, for example if we had a 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam(including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam(all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score.\nWe will be using all 4 metrics to make sure our model does well. 
For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.", "'''\nInstructions:\nCompute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions\nyou made earlier stored in the 'predictions' variable.\n'''\n\n'''\nSolution\n'''\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\nprint('Accuracy score: ', format(accuracy_score(y_test, predictions)))\nprint('Precision score: ', format(precision_score(y_test, predictions)))\nprint('Recall score: ', format(recall_score(y_test, predictions)))\nprint('F1 score: ', format(f1_score(y_test, predictions)))", "Step 7: Conclusion\nOne of the major advantages that Naive Bayes has over other classification algorithms is its ability to handle an extremely large number of features. In our case, each word is treated as a feature and there are thousands of different words. Also, it performs well even with the presence of irrelevant features and is relatively unaffected by them. The other major advantage it has is its relative simplicity. Naive Bayes' works well right out of the box and tuning it's parameters is rarely ever necessary, except usually in cases where the distribution of the data is known. \nIt rarely ever overfits the data. Another important advantage is that its model training and prediction times are very fast for the amount of data it can handle. All in all, Naive Bayes' really is a gem of an algorithm!\nCongratulations! You have succesfully designed a model that can efficiently predict if an SMS message is spam or not!\nThank you for learning with us!" ]
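A quick, illustrative cross-check of the evaluation metrics described in Step 6 of the notebook above: the snippet below recomputes accuracy, precision, recall and F1 by hand for the skewed toy scenario sketched there (100 messages, 2 actual spam, 90 predicted ham, 10 predicted spam). The counts are the made-up ones from that paragraph, not results from the SMS dataset, and the variable names are chosen here purely for illustration.

# Toy confusion-matrix counts from the skewed example in Step 6:
# the 2 actual spam messages are predicted ham (false negatives),
# 10 actual ham messages are predicted spam (false positives),
# and the remaining 88 ham messages are predicted correctly.
tp, fp, fn, tn = 0, 10, 2, 88

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

print(accuracy, precision, recall, f1)  # 0.88 0.0 0.0 0.0

Despite 88% accuracy, precision and recall are both zero because no spam was caught, which is exactly why the notebook relies on all four metrics rather than accuracy alone.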
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
apache-2.0
[ "Beam Notebooks and Dataframes Demo\nThis example demonstrates how to set up an Apache Beam pipeline that reads from a\nGoogle Cloud Storage file containing text from Shakespeare's work King Lear, \ntokenizes the text lines into individual words, and performs a frequency count on each of those words. \nWe will perform the aggregation operations using the Beam Dataframes API, which allows us to use Pandas-like syntax to write your transformations. We will see how we can easily translate from using Pandas locally to using Dataframes in Apache Beam (which could then be run on Dataflow\nFor details about the Apache Beam Dataframe API, see the Documentation.\nWe first start with the necessary imports:", "# Python's regular expression library\nimport re\n\n# Beam and interactive Beam imports\nimport apache_beam as beam\nfrom apache_beam.runners.interactive.interactive_runner import InteractiveRunner\nimport apache_beam.runners.interactive.interactive_beam as ib\n\n# Dataframe API imports\nfrom apache_beam.dataframe.convert import to_dataframe\nfrom apache_beam.dataframe.convert import to_pcollection", "We will be using the re library to parse our lines of text. We will import the InteractiveRunner class for executing out pipeline in the notebook environment and the interactive_beam module for exploring the PCollections. Finally we will import two functions from the Dataframe API, to_dataframe and to_pcollection. to_dataframe converts your (schema-aware) PCollection into a dataframe and to_pcollection goes back in the other direction to a PCollection of type beam.Row.\nWe will first create a composite PTransform ReadWordsFromText to read in a file pattern (file_pattern), use the ReadFromText source to read in the files, and then FlatMap with a lambda to parse the line into individual words.", "class ReadWordsFromText(beam.PTransform):\n \n def __init__(self, file_pattern):\n self._file_pattern = file_pattern\n \n def expand(self, pcoll):\n return (pcoll.pipeline\n | beam.io.ReadFromText(self._file_pattern)\n | beam.FlatMap(lambda line: re.findall(r'[\\w\\']+', line.strip(), re.UNICODE)))", "To be able to process our data in the notebook environment and explore the PCollections, we will use the interactive runner. We create this pipeline object in the same manner as usually, but passing in InteractiveRunner() as the runner.", "p = beam.Pipeline(InteractiveRunner())", "Now we're ready to start processing our data! We first apply our ReadWordsFromText transform to read in the lines of text from Google Cloud Storage and parse into individual words.", "words = p | 'ReadWordsFromText' >> ReadWordsFromText('gs://apache-beam-samples/shakespeare/kinglear.txt')", "Now we will see some capabilities of the interactive runner. First we can use ib.show to view the contents of a specific PCollection from any point of our pipeline.", "ib.show(words)", "Great! We see that we have 28,001 words in our PCollection and we can view the words in our PCollection. \nWe can also view the current DAG for our graph by using the ib.show_graph() method. Note that here we pass in the pipeline object rather than a PCollection", "ib.show_graph(p)", "In the above graph, the rectanglar boxes correspond to PTransforms and the circles correspond to PCollections. 
\nNext we will add a simple schema to our PCollection and convert the PCollection into a dataframe using the to_dataframe method.", "word_rows = words | 'ToRows' >> beam.Map(lambda word: beam.Row(word=word))\n\ndf = to_dataframe(word_rows)", "We can now explore our PCollection as a Pandas-like dataframe! One of the first things many data scientists do as soon as they load data into a dataframe is explore the first few rows of data using the head method. Let's see what happens here.", "df.head()", "Notice that we got a very specific type of error! The WontImplementError is for Pandas methods that will not be implemented for Beam dataframes. These are methods that violate the Beam model for one reason or another. For example, in this case the head method depends on the order of the dataframe. However, this is in conflict with the Beam model. \nOur goal however is to count the number of times each word appears in the ingested text. First we will add a new column in our dataframe named count with a value of 1 for all rows. After that, we will group by the value of the word column and apply the sum method for the count field.", "df['count'] = 1\ncounted = df.groupby('word').sum()", "That's it! It looks exactly like the code one would write when using Pandas. However, what does this look like in the DAG for the pipeline? We can see this by executing ib.show_graph(p) as before.", "ib.show_graph(p)", "We can see that the dataframe manipulations added a new PTransform to our pipeline. Let us convert the dataframe back to a PCollection so we can use ib.show to view the contents.", "word_counts = to_pcollection(counted, include_indexes=True)\nib.show(word_counts)", "Great! We can now see that the words have been successfully counted. Finally let us build in a sink into the pipeline. We can do this in two ways. If we wish to write to a CSV file, then we can use the dataframe's to_csv method. We can also use the WriteToText transform after converting back to a PCollection. Let's do both and explore the outputs.", "counted.to_csv('from_df.csv')\n_ = word_counts | beam.io.WriteToText('from_pcoll.csv')\n\nib.show_graph(p)", "Note that we can see the branching with two different sinks, also we can see where the dataframe is converted back to a PCollection. We can run our entire pipeline by using p.run() as normal.", "p.run()", "Let us now look at the beginning of the CSV files using the bash line magic with the head command to compare.", "!head from_df*\n\n!head from_pcoll*", "We (functionally) end up with the same information as expected! The big difference is in how the results are presented. In the case of the output from the WriteToText connector, we did not convert our PCollection from objects of type Row. We could write a simple intermediate transform to pull out the properties of the Row object into a comma-seperated representation. For example:\ndef row_to_csv(element):\n output = f\"{element.word},{element.count}\"\n return output\nThe we could replace the code _ = word_counts | beam.io.WriteToText('from_pcoll.csv') with\n_ = word_counts | beam.Map(row_to_csv)\n | beam.io.WriteToText('from_pcoll.csv')\nHowever, note that the to_csv method for the dataframe took care of this conversion for us." ]
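The Beam notebook above runs everything on the InteractiveRunner; its introduction notes that the same pipeline "could then be run on Dataflow". The sketch below is one hedged way to do that using plain PTransforms rather than the DataFrame API: the project ID, region and bucket paths are placeholders to replace with your own, and ReadWordsFromText is assumed to be defined exactly as in the notebook.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder values -- substitute your own project, region and GCS bucket.
options = PipelineOptions(
    runner='DataflowRunner',
    project='my-project-id',
    region='us-central1',
    temp_location='gs://my-bucket/tmp',
)

with beam.Pipeline(options=options) as p:
    words = p | 'ReadWordsFromText' >> ReadWordsFromText(
        'gs://apache-beam-samples/shakespeare/kinglear.txt')
    _ = (words
         | 'PairWithOne' >> beam.Map(lambda w: (w, 1))
         | 'SumPerWord' >> beam.CombinePerKey(sum)
         | 'Format' >> beam.Map(lambda kv: '{},{}'.format(kv[0], kv[1]))
         | 'Write' >> beam.io.WriteToText('gs://my-bucket/output/word_counts'))

Exiting the with block submits the job and waits for it to finish, so no explicit p.run() is needed in this form.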
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
biothings/biothings_explorer
jupyter notebooks/Multi intermediate nodes query.ipynb
apache-2.0
[ "Introduction\nThis notebook demonstrates how BioThings Explorer can be used to execute queries having more than one intermediate nodes:\nThe query starts from drug \"Anisindione\", the two intermediate nodes with be *Gene and DiseaseOrPhenotypicFeature\", the final output will be \"PhenotypicFeature\".\nBackground: BioThings Explorer can answer two classes of queries -- \"EXPLAIN\" and \"PREDICT\". EXPLAIN queries are described in EXPLAIN_demo.ipynb, and PREDICT queries are described in PREDICT_demo.ipynb. Here, we describe PREDICT queries and how to use BioThings Explorer to execute them. A more detailed overview of the BioThings Explorer systems is provided in these slides.\nTo experiment with an executable version of this notebook, load it in Google Colaboratory.\nStep 0: Load BioThings Explorer modules\nInstall the biothings_explorer and biothings_schema packages, as described in this README. This only needs to be done once (but including it here for compability with colab).", "!pip install git+https://github.com/biothings/biothings_explorer#egg=biothings_explorer", "Next, import the relevant modules:\n\nHint: Find corresponding bio-entity representation used in BioThings Explorer based on user input (could be any database IDs, symbols, names)\nFindConnection: Find intermediate bio-entities which connects user specified input and output", "from biothings_explorer.hint import Hint\nfrom biothings_explorer.user_query_dispatcher import FindConnection\nimport nest_asyncio\nnest_asyncio.apply()", "Step 1: Find representation of \"Anisindione\" in BTE\nIn this step, BioThings Explorer translates our query string \"Anisindioine\" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown.\nSearch terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., \"lupus\"), ChemicalSubstance (e.g., \"acetaminophen\"), Gene (e.g., \"CDK2\"), BiologicalProcess (e.g., \"T cell differentiation\"), and Pathway (e.g., \"Citric acid cycle\").", "ht = Hint()\nanisindione = ht.query(\"Anisindione\")['ChemicalSubstance'][0]\n\nanisindione", "Step 2: Find phenotypes that are associated with Anisindione through Gene and DiseaseOrPhenotypicFeature as intermediate nodes\nIn this section, we find all paths in the knowledge graph that connect Anisindione to any entity that is a phenotypic feature. To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.", "fc = FindConnection(input_obj=anisindione, \n output_obj='PhenotypicFeature', \n intermediate_nodes=['Gene', 'Disease'])\n\nfc.connect(verbose=True)\n\ndf = fc.display_table_view()\n\ndf.head()", "The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (ANISINDIONE) to an intermediate node (a gene or protein) to another intermediate node (a DisseaseOrPhenotypicFeature) to an ending node (a Phenotypic Feature). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). 
Let's remove all examples where the output_name (the phenotype label) is None, and focus on paths that use a specific pair of mechanistic predicates: physically_interacts_with followed by prevents.", "dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == \"physically_interacts_with\" and pred2 == \"prevents\"')\n\ndfFilt" ]
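A small pandas-only follow-up to the BioThings Explorer notebook above: once dfFilt has been built, it can be summarized to see which candidate phenotypes are supported by the most paths. This sketch assumes only the 'output_name' column shown in the displayed table, and the actual counts will depend on the API responses returned at query time.

# Count how many distinct paths support each candidate phenotype in dfFilt,
# then show the ten best-supported phenotype labels.
path_counts = dfFilt.groupby('output_name').size().sort_values(ascending=False)
print(path_counts.head(10))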
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AlJohri/DAT-DC-12
homework/homework1.ipynb
mit
[ "Homework 1\nThe goal of this homework is to ensure you have a decent understanding of Python AND know how to read and interpret documentation. Please make heavy use of Google, StackOverflow, the Python 3 Documentation, and of course the help function.\nEach problem has an associated test via the assert statement that will tell you if you implemented it properly.\nFeel free to use any previously functions for future problems (i.e. feel free to use a square function when implementing square_and_add_one)\nRun this First\nThis line import several useful modules into memory for later use.", "import os, sys, csv, json, math, random, collections, time, itertools, functools", "This line create a CSV (comma seperated file) called hw1data.csv in the current working directory.", "%%file hw1data.csv\nid,sex,weight\n1,M,190\n2,F,120\n3,F,110\n4,M,150\n5,O,120\n6,M,120\n7,F,140", "Basic", "def double(x):\n \"\"\"\n double the value x\n \"\"\"\n\nassert double(10) == 20\n\ndef apply_to_100(f):\n \"\"\"\n runs some abitrary function f on the value 100 and returns the output\n \"\"\"\n\nassert(apply_to_100(double) == 200)\n\n\"\"\"\ncreate a an anonymous function using lambda that takes some value x and adds 1 to x\n\"\"\"\nadd_one = lambda x: x\n\nassert apply_to_100(add_one) == 101\n\ndef get_up_to_first_three_elements(l):\n \"\"\"\n get up to the first three elements in list l\n \"\"\"\n return \n\nassert get_up_to_first_three_elements([1,2,3,4]) == [1,2,3]\nassert get_up_to_first_three_elements([1,2]) == [1,2]\nassert get_up_to_first_three_elements([1]) == [1]\nassert get_up_to_first_three_elements([]) == []\n\ndef caesar_cipher(s, key):\n \"\"\"\n https://www.hackerrank.com/challenges/caesar-cipher-1\n Given an unencrypted string s and an encryption key (an integer), compute the caesar cipher.\n \n Basically just shift each letter by the value of key. A becomes C if key = 2. This is case sensitive.\n \n What is a Caesar Cipher? https://en.wikipedia.org/wiki/Caesar_cipher\n \n Hint: ord function https://docs.python.org/2/library/functions.html#ord\n Hint: chr function https://docs.python.org/2/library/functions.html#chr\n\n print(ord('A'), ord('Z'), ord('a'), ord('z'))\n print(chr(65), chr(90), chr(97), chr(122))\n \"\"\"\n\n new_s = []\n\n for c in s:\n if ord('A') <= ord(c) <= ord('Z'):\n new_c = chr(ord('A') + (ord(c) - ord('A') + 2) % 26)\n new_s.append(new_c)\n elif ord('a') <= ord(c) <= ord('z'):\n new_c = chr(ord('a') + (ord(c) - ord('a') + 2) % 26)\n new_s.append(new_c)\n else:\n new_s.append(c)\n \n return \"\".join(new_s)\n\nassert caesar_cipher(\"middle-Outz\", 2) == \"okffng-Qwvb\"", "Working with Files", "def create_list_of_lines_in_hw1data():\n \"\"\"\n Read each line of hw1data.csv into a list and return the list of lines.\n Remove the newline character (\"\\n\") at the end of each line.\n \n What is a newline character? 
https://en.wikipedia.org/wiki/Newline\n \n Hint: Reading a File (https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects)\n \"\"\"\n with open(\"hw1data.csv\", \"r\") as f:\n return [line.strip() for line in f]\n # lines = f.read().splitlines() # alternative 1\n # lines = [line.strip() for line in f.readlines()] # altenative 2\n\nassert create_list_of_lines_in_hw1data() == [\n \"id,sex,weight\", \"1,M,190\", \"2,F,120\", \"3,F,110\",\n \"4,M,150\", \"5,O,120\", \"6,M,120\", \"7,F,140\",\n ]\n\ndef filter_to_lines_with_just_M():\n \"\"\"\n Read each line in like last time except filter down to only the rows with \"M\" in them.\n \n Hint: Filter using List Comprehensions (http://www.diveintopython.net/power_of_introspection/filtering_lists.html)\n \"\"\"\n lines = create_list_of_lines_in_hw1data()\n return [line for line in lines ]\n\nassert filter_to_lines_with_just_M() == [\"1,M,190\", \"4,M,150\", \"6,M,120\"]\n\ndef filter_to_lines_with_just_F():\n \"\"\"\n Read each line in like last time except filter down to only the rows with \"F\" in them.\n \"\"\"\n lines = create_list_of_lines_in_hw1data()\n return [line for line in lines ]\n\nassert filter_to_lines_with_just_F() == [\"2,F,120\", \"3,F,110\", \"7,F,140\"]\n\ndef filter_to_lines_with_any_sex(sex):\n \"\"\"\n Read each line in like last time except filter down to only the rows with \"M\" in them.\n \"\"\"\n lines = create_list_of_lines_in_hw1data()\n return [line for line in lines ]\n\nassert filter_to_lines_with_any_sex(\"O\") == [\"5,O,120\"]\n\ndef get_average_weight():\n \"\"\"\n This time instead of just reading the file, parse the csv using csv.reader.\n \n get the average weight of all people rounded to the hundredth place\n \n Hint: https://docs.python.org/3/library/csv.html#csv.reader\n \"\"\"\n weights = []\n with open(\"hw1data.csv\", \"r\") as f:\n reader = csv.reader(f)\n next(reader)\n for row in reader:\n print(int(row[2]))\n return round(avg_weight, 2)\n\nassert get_average_weight() == 135.71\n\ndef create_list_of_dicts_in_hw1data():\n \"\"\"\n create list of dicts for each line in the hw1data (except the header)\n \"\"\"\n with open(\"hw1data.csv\", \"r\") as f:\n return []\n\nassert create_list_of_dicts_in_hw1data() == [\n {\"id\": \"1\", \"sex\": \"M\", \"weight\": \"190\"},\n {\"id\": \"2\", \"sex\": \"F\", \"weight\": \"120\"},\n {\"id\": \"3\", \"sex\": \"F\", \"weight\": \"110\"},\n {\"id\": \"4\", \"sex\": \"M\", \"weight\": \"150\"},\n {\"id\": \"5\", \"sex\": \"O\", \"weight\": \"120\"},\n {\"id\": \"6\", \"sex\": \"M\", \"weight\": \"120\"},\n {\"id\": \"7\", \"sex\": \"F\", \"weight\": \"140\"}\n ]", "Project Euler", "def sum_of_multiples_of_three_and_five_below_1000():\n \"\"\"\n https://projecteuler.net/problem=1\n If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9.\n The sum of these multiples is 23.\n Find the sum of all the multiples of 3 or 5 below 1000.\n\n Hint: Modulo Operator (https://docs.python.org/3/reference/expressions.html#binary-arithmetic-operations)\n Hint: List Comprehension (https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)\n Hint: Range Function (https://docs.python.org/3/library/functions.html#func-range)\n \"\"\"\n return \n\n\ndef sum_of_even_fibonacci_under_4million():\n \"\"\"\n https://projecteuler.net/problem=2\n Each new term in the Fibonacci sequence is generated by adding the previous two terms.\n By starting with 1 and 2, the first 10 terms will be:\n 1, 2, 3, 5, 8, 13, 21, 34, 55, 
89, ...\n By considering the terms in the Fibonacci sequence whose values do not exceed four million,\n find the sum of the even-valued terms.\n \n Hint: While Loops (http://learnpythonthehardway.org/book/ex33.html)\n \"\"\"\n the_sum = 0\n a, b = 1, 2\n while b < 4000000:\n \n return the_sum\n\ndef test_all():\n assert sum_of_multiples_of_three_and_five_below_1000() == 233168\n assert sum_of_even_fibonacci_under_4million() == 4613732\n \ntest_all()", "Strings", "from collections import Counter\n\ndef remove_punctuation(s):\n \"\"\"remove periods, commas, and semicolons\n \"\"\"\n return s.replace()\n\ndef tokenize(s):\n \"\"\"return a list of lowercased tokens (words) in a string without punctuation\n \"\"\"\n return remove_punctuation(s.lower())\n\ndef word_count(s):\n \"\"\"count the number of times each word (lowercased) appears and return a dictionary\n \"\"\"\n return Counter(words)\n\ndef test_all():\n test_string1 = \"A quick brown Al, jumps over the lazy dog; sometimes...\"\n test_string2 = \"This this is a sentence sentence with words multiple multiple times.\"\n \n # ---------------------------------------------------------------------------------- #\n \n test_punctuation1 = \"A quick brown Al jumps over the lazy dog sometimes\"\n test_punctuation2 = \"This this is a sentence sentence with words multiple multiple times\"\n \n assert remove_punctuation(test_string1) == test_punctuation1\n assert remove_punctuation(test_string2) == test_punctuation2\n \n # ---------------------------------------------------------------------------------- #\n \n test_tokens1 = [\"a\", \"quick\", \"brown\", \"al\", \"jumps\", \"over\", \"the\", \"lazy\", \"dog\", \"sometimes\"]\n test_tokens2 = [\n \"this\", \"this\", \"is\", \"a\", \"sentence\", \"sentence\", \"with\", \"words\", \"multiple\", \"multiple\", \"times\"\n ]\n\n assert tokenize(test_string1) == test_tokens1\n assert tokenize(test_string2) == test_tokens2\n\n # ---------------------------------------------------------------------------------- #\n\n test_wordcount1 = {\n \"a\": 1, \"quick\": 1, \"brown\": 1, \"al\": 1, \"jumps\": 1, \"over\": 1, \"the\": 1, \"lazy\": 1, \"dog\": 1, \"sometimes\": 1\n }\n test_wordcount2 = {\"this\": 2, \"is\": 1, \"a\": 1, \"sentence\": 2, \"with\": 1, \"words\": 1, \"multiple\": 2, \"times\": 1}\n \n assert word_count(test_string1) == test_wordcount1\n assert word_count(test_string2) == test_wordcount2\n\ntest_all()", "Linear Algebra\nPlease find the following empty functions and write the code to complete the logic.\nThese functions are focused around implementing vector algebra operations. The vectors can be of any length. 
If a function accepts two vectors, assume they are the same length.\nKhan Academy has a decent introduction:\n[https://www.khanacademy.org/math/linear-algebra/vectors_and_spaces/vectors/v/vector-introduction-linear-algebra]", "def vector_add(v, w):\n \"\"\"adds two vectors componentwise and returns the result\n hint: use zip()\n v + w = [4, 5, 1] + [9, 8, 1] = [13, 13, 2]\n \"\"\"\n return []\n\ndef vector_subtract(v, w):\n \"\"\"subtracts two vectors componentwise and returns the result\n hint use zip()\n v + w = [4, 5, 1] - [9, 8, 1] = [-5, -3, 0]\n \"\"\"\n return []\n\ndef vector_sum(vectors):\n \"\"\"sums a list of vectors or arbitrary length and returns the resulting vector\n [[1,2], [4,5], [8,3]] = [13,10]\n \"\"\"\n v_copy = list(vectors)\n result = v_copy.pop()\n for v in v_copy:\n result = \n return result\n\ndef scalar_multiply(c, v):\n \"\"\"returns a vector where components are multplied by c\"\"\"\n return []\n\ndef dot(v, w):\n \"\"\"dot product v.w\n v_1 * w_1 + ... + v_n * w_n\"\"\"\n return sum()\n\ndef sum_of_squares(v):\n \"\"\" v.v square each component and sum them\n v_1 * v_1 + ... + v_n * v_n\"\"\"\n return \n\ndef magnitude(v):\n \"\"\"the Norm of a vector, the sqrt of the sum of the squares of the components\"\"\"\n return math.sqrt()\n\ndef distance(v, w):\n \"\"\" the distance of v to w\"\"\"\n return \n\ndef cross_product(v, w): # or outer_product(v, w)\n \"\"\"Bonus:\n The outer/cross product of v and w\"\"\"\n for i in v:\n yield scalar_multiply(i, w)\n\ndef test_all():\n test_v = [4, 5, 1] \n test_w = [9, 8, 1] \n list_v = [[1,2], [4,5], [8,3]]\n \n print(\"Vector Add\", test_v, test_w, vector_add(test_v, test_w))\n print(\"Vector Subtract\", test_v, test_w, vector_subtract(test_v, test_w))\n print(\"Vector Sum\", list_v, vector_sum(list_v))\n print(\"Scalar Multiply\", 3, test_w, scalar_multiply(3, test_w))\n print(\"Dot\", test_v, test_w, dot(test_v, test_w))\n print(\"Sum of Squares\", test_v, sum_of_squares(test_v))\n print(\"Magnitude\", test_v, magnitude(test_v))\n print(\"Distance\", test_v, test_w, distance(test_v, test_w))\n print(\"Cross Product\", list(cross_product(test_v, test_w)))\n\n assert vector_add(test_v, test_w) == [13, 13, 2]\n assert vector_subtract(test_v, test_w) == [-5, -3, 0]\n assert vector_sum(list_v) == [13,10] \n assert scalar_multiply(3, test_w) == [27, 24, 3]\n assert dot(test_v, test_w) == 77\n assert sum_of_squares(test_v) == 42\n assert magnitude(test_v) == 6.48074069840786\n assert distance(test_v, test_w) == 5.830951894845301\n assert list(cross_product(test_v, test_w)) == [[36, 32, 4], [45, 40, 5], [9, 8, 1]]\n\ntest_all()" ]
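The linear-algebra section of the homework above hints at zip() and list comprehensions without demonstrating them. The snippet below only illustrates those two building blocks on throwaway values; it is deliberately not a solution to any of the graded functions.

v = [4, 5, 1]
w = [9, 8, 1]

# zip pairs up corresponding elements of two equal-length lists.
print(list(zip(v, w)))                    # [(4, 9), (5, 8), (1, 1)]

# A list comprehension applies an expression to each element (or to each pair).
print([x * 10 for x in v])                # [40, 50, 10]
print([max(a, b) for a, b in zip(v, w)])  # componentwise max: [9, 8, 1]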
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
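The linear-algebra notebook above deliberately leaves its function bodies empty for the reader to complete. As a point of comparison, a minimal sketch of a few of those stubs, assuming vectors are plain Python lists, could look like the following; the bodies are one possible solution, not the notebook author's own code.

```python
# Reference sketch for a few of the empty exercise functions above.
# The names mirror the notebook's stubs; the bodies are one possible
# solution, not the notebook author's own.
import math

def vector_add(v, w):
    # componentwise sum: [4, 5, 1] + [9, 8, 1] -> [13, 13, 2]
    return [v_i + w_i for v_i, w_i in zip(v, w)]

def dot(v, w):
    # v_1 * w_1 + ... + v_n * w_n
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def magnitude(v):
    # Euclidean norm: sqrt(v . v)
    return math.sqrt(dot(v, v))

# Same values the notebook's test_all() uses
assert vector_add([4, 5, 1], [9, 8, 1]) == [13, 13, 2]
assert dot([4, 5, 1], [9, 8, 1]) == 77
assert abs(magnitude([4, 5, 1]) - 6.48074069840786) < 1e-9
```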
molpopgen/fwdpy
docs/examples/trajectories.ipynb
gpl-3.0
[ "Tracking mutation frequencies", "%matplotlib inline\n%pylab inline\nimport fwdpy as fp\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport copy", "Run a simulation", "nregions = [fp.Region(0,1,1),fp.Region(2,3,1)]\nsregions = [fp.ExpS(1,2,1,-0.1),fp.ExpS(1,2,0.01,0.001)]\nrregions = [fp.Region(0,3,1)]\nrng = fp.GSLrng(101)\npopsizes = np.array([1000],dtype=np.uint32)\npopsizes=np.tile(popsizes,10000)\n#Initialize a vector with 1 population of size N = 1,000\npops=fp.SpopVec(1,1000)\n#This sampler object will record selected mutation\n#frequencies over time. A sampler gets the length\n#of pops as a constructor argument because you \n#need a different sampler object in memory for\n#each population.\nsampler=fp.FreqSampler(len(pops))\n#Record mutation frequencies every generation\n#The function evolve_regions sampler takes any\n#of fwdpy's temporal samplers and applies them.\n#For users familiar with C++, custom samplers will be written,\n#and we plan to allow for custom samplers to be written primarily \n#using Cython, but we are still experimenting with how best to do so.\nrawTraj=fp.evolve_regions_sampler(rng,pops,sampler,\n popsizes[0:],0.001,0.001,0.001,\n nregions,sregions,rregions,\n #The one means we sample every generation.\n 1)\n\nrawTraj = [i for i in sampler]\n#This example has only 1 set of trajectories, so let's make a variable for thet\n#single replicate\ntraj=rawTraj[0]\nprint traj.head()\nprint traj.tail()\nprint traj.freq.max()", "Group mutation trajectories by position and effect size\nMax mutation frequencies", "mfreq = traj.groupby(['pos','esize']).max().reset_index()\n#Print out info for all mutations that hit a frequency of 1 (e.g., fixed)\nmfreq[mfreq['freq']==1]", "The only fixation has an 'esize' $> 0$, which means that it was positively selected,\nFrequency trajectory of fixations", "#Get positions of mutations that hit q = 1\nmpos=mfreq[mfreq['freq']==1]['pos']\n\n#Frequency trajectories of fixations\nfig = plt.figure()\nax = plt.subplot(111)\nplt.xlabel(\"Time (generations)\")\nplt.ylabel(\"Mutation frequency\")\nax.set_xlim(traj['generation'].min(),traj['generation'].max())\nfor i in mpos:\n plt.plot(traj[traj['pos']==i]['generation'],traj[traj['pos']==i]['freq'])\n\n#Let's get histogram of effect sizes for all mutations that did not fix\nfig = plt.figure()\nax = plt.subplot(111)\nplt.xlabel(r'$s$ (selection coefficient)')\nplt.ylabel(\"Number of mutations\")\nmfreq[mfreq['freq']<1.0]['esize'].hist()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
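The trajectories notebook reduces the per-generation frequency table to one row per mutation with a groupby over position and effect size, then filters for fixations. A small self-contained sketch of that same pattern on a made-up table, assuming the column names pos, esize, generation and freq used above, could look like this (written for Python 3, whereas the notebook itself uses Python 2 print statements):

```python
# Stand-alone illustration of the groupby/max step, with a synthetic table
# that has the same columns (pos, esize, generation, freq) as the sampler output.
import pandas as pd

traj = pd.DataFrame({
    "pos":        [0.1, 0.1, 0.1, 0.7, 0.7],
    "esize":      [0.02, 0.02, 0.02, -0.05, -0.05],
    "generation": [1, 2, 3, 1, 2],
    "freq":       [0.001, 0.4, 1.0, 0.001, 0.002],
})

# Maximum frequency ever reached by each mutation
mfreq = traj.groupby(["pos", "esize"]).max().reset_index()

# Mutations that fixed (reached frequency 1)
print(mfreq[mfreq["freq"] == 1.0])
```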
florianwittkamp/FD_ACOUSTIC
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
gpl-3.0
[ "FD_2D_DX4_DT2_fast 2-D acoustic Finite-Difference modelling\nGNU General Public License v3.0\nAuthor: Florian Wittkamp\nFinite-Difference acoustic seismic wave simulation\nDiscretization of the first-order acoustic wave equation\nTemporal second-order accuracy $O(\\Delta T^2)$\nSpatial fourth-order accuracy $O(\\Delta X^4)$\nInitialisation", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "Input Parameter", "# Discretization\nc1=30 # Number of grid points per dominant wavelength\nc2=0.2 # CFL-Number\nnx=300 # Number of grid points in X\nny=300 # Number of grid points in Y\nT=1 # Total propagation time\n\n# Source Signal\nf0= 5 # Center frequency Ricker-wavelet\nq0= 100 # Maximum amplitude Ricker-Wavelet\nxscr = 150 # Source position (in grid points) in X\nyscr = 150 # Source position (in grid points) in Y\n\n# Receiver\nxrec1=150; yrec1=120; # Position Reciever 1 (in grid points)\nxrec2=150; yrec2=150; # Position Reciever 2 (in grid points)\nxrec3=150; yrec3=180;# Position Reciever 3 (in grid points)\n\n# Velocity and density\nmodell_v = 3000*np.ones((ny,nx))\nrho=2.2*np.ones((ny,nx))", "Preparation", "# Init wavefields\nvx=np.zeros(shape = (ny,nx))\nvy=np.zeros(shape = (ny,nx))\np=np.zeros(shape = (ny,nx))\nvx_x=np.zeros(shape = (ny,nx))\nvy_y=np.zeros(shape = (ny,nx))\np_x=np.zeros(shape = (ny,nx))\np_y=np.zeros(shape = (ny,nx))\n\n# Calculate first Lame-Paramter\nl=rho * modell_v * modell_v\n\ncmin=min(modell_v.flatten()) # Lowest P-wave velocity\ncmax=max(modell_v.flatten()) # Highest P-wave velocity\nfmax=2*f0 # Maximum frequency\ndx=cmin/(fmax*c1) # Spatial discretization (in m)\ndy=dx # Spatial discretization (in m)\ndt=dx/(cmax)*c2 # Temporal discretization (in s)\nlampda_min=cmin/fmax # Smallest wavelength\n\n# Output model parameter:\nprint(\"Model size: x:\",dx*nx,\"in m, y:\",dy*ny,\"in m\")\nprint(\"Temporal discretization: \",dt,\" s\")\nprint(\"Spatial discretization: \",dx,\" m\")\nprint(\"Number of gridpoints per minimum wavelength: \",lampda_min/dx)", "Create space and time vector", "x=np.arange(0,dx*nx,dx) # Space vector in X\ny=np.arange(0,dy*ny,dy) # Space vector in Y\nt=np.arange(0,T,dt) # Time vector\nnt=np.size(t) # Number of time steps\n\n# Plotting model\nfig, (ax1, ax2) = plt.subplots(1, 2)\nfig.subplots_adjust(wspace=0.4,right=1.6)\nax1.plot(x,modell_v)\nax1.set_ylabel('VP in m/s')\nax1.set_xlabel('Depth in m')\nax1.set_title('P-wave velocity')\n\nax2.plot(x,rho)\nax2.set_ylabel('Density in g/cm^3')\nax2.set_xlabel('Depth in m')\nax2.set_title('Density');\n", "Source signal - Ricker-wavelet", "tau=np.pi*f0*(t-1.5/f0)\nq=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)\n\n# Plotting source signal\nplt.figure(3)\nplt.plot(t,q)\nplt.title('Source signal Ricker-Wavelet')\nplt.ylabel('Amplitude')\nplt.xlabel('Time in s')\nplt.draw()", "Time stepping", "# Init Seismograms\nSeismogramm=np.zeros((3,nt)); # Three seismograms\n\n# Calculation of some coefficients\ni_dx=1.0/(dx)\ni_dy=1.0/(dy)\nc1=9.0/(8.0*dx)\nc2=1.0/(24.0*dx)\nc3=9.0/(8.0*dy)\nc4=1.0/(24.0*dy)\nc5=1.0/np.power(dx,3)\nc6=1.0/np.power(dy,3)\nc7=1.0/np.power(dx,2)\nc8=1.0/np.power(dy,2)\nc9=np.power(dt,3)/24.0\n\n# Prepare slicing parameter:\nkxM2=slice(5-2,nx-4-2)\nkxM1=slice(5-1,nx-4-1)\nkx=slice(5,nx-4)\nkxP1=slice(5+1,nx-4+1)\nkxP2=slice(5+2,nx-4+2)\n\nkyM2=slice(5-2,ny-4-2)\nkyM1=slice(5-1,ny-4-1)\nky=slice(5,ny-4)\nkyP1=slice(5+1,ny-4+1)\nkyP2=slice(5+2,ny-4+2)\n\n## Time stepping\nprint(\"Starting time stepping...\")\nfor n in range(2,nt):\n \n # Inject source wavelet\n 
p[yscr,xscr]=p[yscr,xscr]+q[n]\n \n # Update velocity\n p_x[ky,kx]=c1*(p[ky,kxP1]-p[ky,kx])-c2*(p[ky,kxP2]-p[ky,kxM1])\n p_y[ky,kx]=c3*(p[kyP1,kx]-p[ky,kx])-c4*(p[kyP2,kx]-p[kyM1,kx])\n \n vx=vx-dt/rho*p_x\n vy=vy-dt/rho*p_y\n \n # Update pressure\n vx_x[ky,kx]=c1*(vx[ky,kx]-vx[ky,kxM1])-c2*(vx[ky,kxP1]-vx[ky,kxM2])\n vy_y[ky,kx]=c3*(vy[ky,kx]-vy[kyM1,kx])-c4*(vy[kyP1,kx]-vy[kyM2,kx])\n \n p=p-l*dt*(vx_x+vy_y)\n \n # Save seismograms\n Seismogramm[0,n]=p[yrec1,xrec1]\n Seismogramm[1,n]=p[yrec2,xrec2]\n Seismogramm[2,n]=p[yrec3,xrec3]\n \nprint(\"Finished time stepping!\")", "Save seismograms", "## Save seismograms\nnp.save(\"Seismograms/FD_2D_DX4_DT2_fast\",Seismogramm)", "Plotting", "## Image plot\nfig, ax = plt.subplots(1,1)\nimg = ax.imshow(p);\nax.set_title('P-Wavefield')\nax.set_xticks(range(0,nx+1,int(nx/5)))\nax.set_yticks(range(0,ny+1,int(ny/5)))\nax.set_xlabel('Grid-points in X')\nax.set_ylabel('Grid-points in Y')\nfig.colorbar(img)\n\n## Plot seismograms\nfig, (ax1, ax2, ax3) = plt.subplots(3, 1)\nfig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )\n\nax1.plot(t,Seismogramm[0,:])\nax1.set_title('Seismogram 1')\nax1.set_ylabel('Amplitude')\nax1.set_xlabel('Time in s')\nax1.set_xlim(0, T)\n\nax2.plot(t,Seismogramm[1,:])\nax2.set_title('Seismogram 2')\nax2.set_ylabel('Amplitude')\nax2.set_xlabel('Time in s')\nax2.set_xlim(0, T)\n\nax3.plot(t,Seismogramm[2,:])\nax3.set_title('Seismogram 3')\nax3.set_ylabel('Amplitude')\nax3.set_xlabel('Time in s')\nax3.set_xlim(0, T);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
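The finite-difference notebook derives its grid spacing from the number of grid points per minimum wavelength and its time step from a CFL number. A short sketch of just that stability bookkeeping, reusing the notebook's default values (3000 m/s velocity, 5 Hz Ricker wavelet, 30 points per wavelength, CFL 0.2), might be:

```python
# The notebook's stability bookkeeping in isolation (values are its defaults).
cmin = cmax = 3000.0      # homogeneous P-wave velocity in m/s
f0 = 5.0                  # Ricker center frequency in Hz
fmax = 2.0 * f0           # rough upper frequency content
points_per_wavelength = 30
cfl = 0.2

dx = cmin / (fmax * points_per_wavelength)   # spatial step in m  -> 10.0
dt = dx / cmax * cfl                         # temporal step in s -> ~6.7e-4
lambda_min = cmin / fmax                     # smallest wavelength in m -> 300.0

print("dx =", dx, "m, dt =", dt, "s, points per wavelength =", lambda_min / dx)
```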
marcelomiky/PythonCodes
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
mit
[ "Learning Scikit-learn: Machine Learning in Python\nIPython Notebook for Chapter 3: Unsupervised Learning - Principal Component Analysis\nPrincipal Component Analysis (PCA) is useful for exploratory data analysis before building predictive models.\nFor our learning methods, PCA will allow us to reduce a high-dimensional space into a low-dimensional one while preserving as much variance as possible. We will use the handwritten digits recognition problem to show how it can be used\nStart by importing numpy, scikit-learn, and pyplot, the Python libraries we will be using in this chapter. Show the versions we will be using (in case you have problems running the notebooks).", "%pylab inline\nimport IPython\nimport sklearn as sk\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nprint 'IPython version:', IPython.__version__\nprint 'numpy version:', np.__version__\nprint 'scikit-learn version:', sk.__version__\nprint 'matplotlib version:', matplotlib.__version__", "Import the digits dataset (http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html) and show its attributes", "from sklearn.datasets import load_digits\ndigits = load_digits()\nX_digits, y_digits = digits.data, digits.target\nprint digits.keys()", "Let's show how the digits look like...", "n_row, n_col = 2, 5\n\ndef print_digits(images, y, max_n=10):\n # set up the figure size in inches\n fig = plt.figure(figsize=(2. * n_col, 2.26 * n_row))\n i=0\n while i < max_n and i < images.shape[0]:\n p = fig.add_subplot(n_row, n_col, i + 1, xticks=[], yticks=[])\n p.imshow(images[i], cmap=plt.cm.bone, interpolation='nearest')\n # label the image with the target value\n p.text(0, -1, str(y[i]))\n i = i + 1\n \nprint_digits(digits.images, digits.target, max_n=10)", "Now, let's define a function that will plot a scatter with the two-dimensional points that will be obtained by a PCA transformation. Our data points will also be colored according to their classes. Recall that the target class will not be used to perform the transformation; we want to investigate if the distribution after PCA reveals the distribution of the different classes, and if they are clearly separable. We will use ten different colors for each of the digits, from 0 to 9.\nFind components and plot first and second components", "def plot_pca_scatter():\n colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray']\n for i in xrange(len(colors)):\n px = X_pca[:, 0][y_digits == i]\n py = X_pca[:, 1][y_digits == i]\n plt.scatter(px, py, c=colors[i])\n plt.legend(digits.target_names)\n plt.xlabel('First Principal Component')\n plt.ylabel('Second Principal Component')", "At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n number of components through the fit method, and can be used on new data to project it onto these components. In scikit-learn, we have various classes that implement different kinds of PCA decompositions. In our case, we will work with the PCA class from the sklearn.decomposition module. 
The most important parameter we can change is n_components, which allows us to specify the number of features that the obtained instances will have.", "from sklearn.decomposition import PCA\n\nn_components = n_row * n_col # 10\nestimator = PCA(n_components=n_components)\nX_pca = estimator.fit_transform(X_digits)\nplot_pca_scatter() # Note that we only plot the first and second principal component", "To finish, let us look at principal component transformations. We will take the principal components from the estimator by accessing the components attribute. Each of its components is a matrix that is used to transform a vector from the original space to the transformed space. In the scatter we previously plotted, we only took into account the first two components.", "def print_pca_components(images, n_col, n_row):\n plt.figure(figsize=(2. * n_col, 2.26 * n_row))\n for i, comp in enumerate(images):\n plt.subplot(n_row, n_col, i + 1)\n plt.imshow(comp.reshape((8, 8)), interpolation='nearest')\n plt.text(0, -1, str(i + 1) + '-component')\n plt.xticks(())\n plt.yticks(())\n\nprint_pca_components(estimator.components_[:n_components], n_col, n_row)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
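The PCA notebook projects the digits onto ten components and plots the first two. One natural follow-up, sketched below, is to ask how much variance those components retain; this uses the same load_digits data and the explained_variance_ratio_ attribute of scikit-learn's PCA, and is written for Python 3 rather than the notebook's Python 2:

```python
# How much variance do the ten components used above actually retain?
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, y = load_digits(return_X_y=True)
pca = PCA(n_components=10).fit(X)

cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, frac in enumerate(cumulative, start=1):
    print("first %2d components explain %.1f%% of the variance" % (i, 100 * frac))
```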
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/mpi-esm-1-2-hr/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: MPI-M\nSource ID: MPI-ESM-1-2-HR\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:17\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-hr', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
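Every property cell in the ES-DOC notebook above follows the same set_id / set_value pattern. A purely illustrative example of what a completed cell might look like is sketched below; the author name, e-mail address and values are placeholders, not the real metadata of this model, and the DOC object is assumed to be the one created at the top of the notebook:

```python
# Illustrative only: what a filled-in property cell could look like.
# The author, e-mail and values below are placeholders, not this model's metadata.
DOC.set_author("Jane Modeller", "jane.modeller@example.org")

DOC.set_id('cmip6.aerosol.key_properties.model_name')
DOC.set_value("Example aerosol scheme v1.0")

DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
DOC.set_value("stratosphere")   # must be one of the valid choices listed in that cell
```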
google/patents-public-data
examples/Document_representation_from_BERT.ipynb
apache-2.0
[ "Document representation from BERT\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.", "import collections\nimport math\nimport random\nimport sys\nimport time\nfrom typing import Dict, List, Tuple\nfrom sklearn.metrics import pairwise\n# Use Tensorflow 2.0\nimport tensorflow as tf\nimport numpy as np\n\n# Set BigQuery application credentials\nfrom google.cloud import bigquery\nimport os\nos.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = \"path/to/file.json\"\n\nproject_id = \"your_bq_project_id\"\nbq_client = bigquery.Client(project=project_id)\n\n# You will have to clone the BERT repo\n!test -d bert_repo || git clone https://github.com/google-research/bert bert_repo\nif not 'bert_repo' in sys.path:\n sys.path += ['bert_repo']", "The BERT repo uses Tensorflow 1 and thus a few of the functions have been moved/changed/renamed in Tensorflow 2. In order for the BERT tokenizer to be used, one of the lines in the repo that was just cloned needs to be modified to comply with Tensorflow 2. Line 125 in the BERT tokenization.py file must be changed as follows:\nFrom => with tf.gfile.GFile(vocab_file, \"r\") as reader:\nTo => with tf.io.gfile.GFile(vocab_file, \"r\") as reader:\nOnce that is complete and the file is saved, the tokenization library can be imported.", "import tokenization", "Load BERT", "MAX_SEQ_LENGTH = 512\nMODEL_DIR = 'path/to/model'\nVOCAB = 'path/to/vocab'\n\ntokenizer = tokenization.FullTokenizer(VOCAB, do_lower_case=True)\n\nmodel = tf.compat.v2.saved_model.load(export_dir=MODEL_DIR, tags=['serve'])\nmodel = model.signatures['serving_default']\n\n# Mean pooling layer for combining\npooling = tf.keras.layers.GlobalAveragePooling1D()", "Get a couple of Patents\nHere we do a simple query from the BigQuery patents data to collect the claims for a sample set of patents.", "# Put your publications here.\ntest_pubs = (\n 'US-8000000-B2', 'US-2007186831-A1', 'US-2009030261-A1', 'US-10722718-B2'\n)\n\njs = r\"\"\"\n // Regex to find the separations of the claims data\n var pattern = new RegExp(/[.][\\\\s]+[0-9]+[\\\\s]*[.]/, 'g');\n if (pattern.test(text)) {\n return text.split(pattern);\n }\n\"\"\"\n\nquery = r'''\n #standardSQL\n CREATE TEMPORARY FUNCTION breakout_claims(text STRING) RETURNS ARRAY<STRING> \n LANGUAGE js AS \"\"\"\n {}\n \"\"\"; \n\n SELECT \n pubs.publication_number, \n title.text as title, \n breakout_claims(claims.text) as claims\n FROM `patents-public-data.patents.publications` as pubs,\n UNNEST(claims_localized) as claims,\n UNNEST(title_localized) as title\n WHERE\n publication_number in {}\n'''.format(js, test_pubs)\n\ndf = bq_client.query(query).to_dataframe()\n\ndf.head()\n\ndef get_bert_token_input(texts):\n input_ids = []\n input_mask = []\n segment_ids = []\n\n for text in texts:\n tokens = tokenizer.tokenize(text)\n if len(tokens) > MAX_SEQ_LENGTH - 2:\n tokens = tokens[0:(MAX_SEQ_LENGTH - 2)]\n tokens = ['[CLS]'] + tokens + ['[SEP]']\n\n\n ids = tokenizer.convert_tokens_to_ids(tokens)\n token_pad = MAX_SEQ_LENGTH - len(ids)\n input_mask.append([1] * len(ids) + 
[0] * token_pad)\n input_ids.append(ids + [0] * token_pad)\n segment_ids.append([0] * MAX_SEQ_LENGTH)\n \n return {\n 'segment_ids': tf.convert_to_tensor(segment_ids, dtype=tf.int64),\n 'input_mask': tf.convert_to_tensor(input_mask, dtype=tf.int64),\n 'input_ids': tf.convert_to_tensor(input_ids, dtype=tf.int64),\n 'mlm_positions': tf.convert_to_tensor([], dtype=tf.int64)\n }\n\ndocs_embeddings = []\nfor _, row in df.iterrows():\n inputs = get_bert_token_input(row['claims'])\n response = model(**inputs)\n avg_embeddings = pooling(\n tf.reshape(response['encoder_layer'], shape=[1, -1, 1024]))\n docs_embeddings.append(avg_embeddings.numpy()[0])\n\npairwise.cosine_similarity(docs_embeddings)\n\ndocs_embeddings[0].shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
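The BERT notebook ends with a pairwise cosine-similarity matrix over the pooled claim embeddings. A small sketch of turning that matrix into a nearest-neighbour report, assuming the df and docs_embeddings objects built above are in scope, might be:

```python
# Nearest-neighbour report built from the cosine-similarity matrix,
# assuming `df` and `docs_embeddings` from the cells above are in scope.
import numpy as np
from sklearn.metrics import pairwise

sim = pairwise.cosine_similarity(docs_embeddings)
np.fill_diagonal(sim, -1.0)            # ignore self-similarity

for i, row in enumerate(sim):
    j = int(np.argmax(row))
    print(df['publication_number'].iloc[i], "is closest to",
          df['publication_number'].iloc[j], "cosine =", round(float(row[j]), 3))
```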
google/starthinker
colabs/dbm_to_sheets.ipynb
apache-2.0
[ "DV360 Report To Sheets\nMove existing DV360 report into a Sheets tab.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.", "!pip install git+https://github.com/google/starthinker\n", "2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.", "from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n", "3. Enter DV360 Report To Sheets Recipe Parameters\n\nSpecify either report name or report id to move a report.\nThe most recent valid file will be moved to the sheet.\nModify the values below for your use case, can be done multiple times, then click play.", "FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'report_id':'', # DV360 report ID given in UI, not needed if name used.\n 'report_name':'', # Name of report, not needed if ID used.\n 'sheet':'', # Full URL to sheet being written to.\n 'tab':'', # Existing tab in sheet to write to.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n", "4. 
Execute DV360 Report To Sheets\nThis does NOT need to be modified unless you are changing the recipe, click play.", "from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dbm':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'report':{\n 'report_id':{'field':{'name':'report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}},\n 'name':{'field':{'name':'report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}}\n },\n 'out':{\n 'sheets':{\n 'sheet':{'field':{'name':'sheet','kind':'string','order':3,'default':'','description':'Full URL to sheet being written to.'}},\n 'tab':{'field':{'name':'tab','kind':'string','order':4,'default':'','description':'Existing tab in sheet to write to.'}},\n 'range':'A1'\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
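In the StarThinker recipe above, json_set_fields substitutes the values in FIELDS into the {'field': {...}} placeholders of TASKS before execution. The sketch below is a rough illustration of that substitution, not StarThinker's actual implementation, and the set_fields name is made up for the example:

```python
# Rough illustration (not StarThinker's real code) of what json_set_fields does:
# walk the recipe and replace each {'field': {...}} placeholder with the value
# supplied in FIELDS, falling back to the placeholder's default.
def set_fields(node, fields):
    if isinstance(node, dict):
        if set(node.keys()) == {'field'}:
            spec = node['field']
            return fields.get(spec['name'], spec.get('default'))
        return {key: set_fields(value, fields) for key, value in node.items()}
    if isinstance(node, list):
        return [set_fields(value, fields) for value in node]
    return node

example_task = {'report': {'name': {'field': {'name': 'report_name', 'kind': 'string', 'default': ''}}}}
print(set_fields(example_task, {'report_name': 'My DV360 Report'}))
# -> {'report': {'name': 'My DV360 Report'}}
```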
jdhp-docs/python_notebooks
nb_dev_python/python_scipy_integrate.ipynb
mit
[ "%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport scipy.integrate", "https://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html\nhttps://docs.scipy.org/doc/scipy-1.3.0/reference/integrate.html\n\nIntegrating functions, given callable object (scipy.integrate.quad)\nSee:\n- https://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html#general-integration-quad\n- https://docs.scipy.org/doc/scipy-1.3.0/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad\nExample:\n$$I = \\int_{0}^{3} x^2 dx = \\frac{1}{3} 3^3 = 9$$", "f = lambda x: np.power(x, 2)\n\nresult = scipy.integrate.quad(f, 0, 3)\nresult", "The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error.\nIntegrating functions, given fixed samples\nhttps://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html#integrating-using-samples", "x = np.linspace(0., 3., 100)\ny = f(x)\n\nplt.plot(x, y);", "In the case of arbitrarily spaced samples, the two functions trapz and simps are available.", "result = scipy.integrate.simps(y, x)\nresult\n\nresult = scipy.integrate.trapz(y, x)\nresult" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ThunderShiviah/code_guild
wk9/notebooks/Ch.2-Extending our functional test using the unittest module.ipynb
mit
[ "Ch. 2: Extending our functional test using the unittest module\nUsing a functional test to scope out a miniumum viable app\nWe'll use selenium to simulate a user visiting our website in a real web browser. We call our tests with selenium functional tests because they let us see how the app functions from the user's point of view.\nFunctional tests tend to track what we might call the User Story, i.e. how a user might work with a particular feature and how the app should respond to them.\nFunctional Test == Acceptance Test == End-to-End Test\nMinimum viable app\nWhat is the simplest thing that we can build that is still useful?", "%cd ../examples/superlists/\n%ls\n\n%%writefile functional_tests.py\n\nfrom selenium import webdriver\n\nbrowser = webdriver.Firefox()\n\n# Edith has heard about a cool new online to-do app. She goes\n# to check out its homepage\nbrowser.get('http://localhost:8000')\n\n# She notices the page title and header mention to-do lists\nassert 'To-Do' in browser.title\n\n# She is invited to enter a to-do item straight away\n\n# She types \"Buy peacock feathers\" into a text box (Edith's hobby\n# is tying fly-fishing lures)\n\n# When she hits enter, the page updates, and now the page lists\n# \"1: Buy peacock feathers\" as an item in a to-do list\n\n# There is still a text box inviting her to add another item. She\n# enters \"Use peacock feathers to make a fly\" (Edith is very methodical)\n\n# The page updates again, and now shows both items on her list\n\n# Edith wonders whether the site will remember her list. Then she sees\n# that the site has generated a unique URL for her -- there is some\n# explanatory text to that effect.\n\n# She visits that URL - her to-do list is still there.\n\n# Satisfied, she goes back to sleep\n\nbrowser.quit()\n\n", "Notice that I've updated the assert to include the word \"To-Do\" instead of \"Django\". Now our test should fail. Let's check that it fails.", "# First start up the server:\n#!python3 manage.py runserver\n\n# Run test\n!python3 functional_tests.py", "We got what was called an expected fail which is what we wanted!\nPython Standard Library's unittest Module\nThere are a couple of little annoyances we should probably deal with. Firstly, the message \"AssertionError\" isn’t very helpful—it would be nice if the test told us what it actually found as the browser title. Also, it’s left a Firefox window hanging around the desktop, it would be nice if this would clear up for us automatically.\nOne option would be to use the second parameter to the assert keyword, something like:\npython\nassert 'To-Do' in browser.title, \"Browser title was \" + browser.title\nAnd we could also use a try/finally to clean up the old Firefox window. But these sorts of problems are quite common in testing, and there are some ready-made solutions for us in the standard library’s unittest module. Let’s use that! In functional_tests.py:", "%%writefile functional_tests.py\n\nfrom selenium import webdriver\nimport unittest\n\nclass NewVisitorTest(unittest.TestCase): #1\n\n def setUp(self): #2\n self.browser = webdriver.Firefox()\n self.browser.implicitly_wait(3) # Wait three seconds before trying anything.\n\n def tearDown(self): #3\n self.browser.quit()\n\n def test_can_start_a_list_and_retrieve_it_later(self): #4\n # Edith has heard about a cool new online to-do app. 
She goes\n # to check out its homepage\n self.browser.get('http://localhost:8000')\n\n # She notices the page title and header mention to-do lists\n self.assertIn('To-Do', self.browser.title) #5\n self.fail('Finish the test!') #6\n\n # She is invited to enter a to-do item straight away\n # [...rest of comments as before]\n\nif __name__ == '__main__': #7\n unittest.main(warnings='ignore') #8", "Some things to notice about our new test file:\n\n\nTests are organised into classes, which inherit from unittest.TestCase.\n\n\nand\n\n\nsetUp and tearDown are special methods which get run before and after each test. I’m using them to start and stop our browser—note that they’re a bit like a try/except, in that tearDown will run even if there’s an error during the test itself.[4] No more Firefox windows left lying around!\n\n\nThe main body of the test is in a method called test_can_start_a_list_and_retrieve_it_later. Any method whose name starts with test is a test method, and will be run by the test runner. You can have more than one test_ method per class. Nice descriptive names for our test methods are a good idea too.\n\n\nWe use self.assertIn instead of just assert to make our test assertions. unittest provides lots of helper functions like this to make test assertions, like assertEqual, assertTrue, assertFalse, and so on. You can find more in the unittest documentation.\n\n\nself.fail just fails no matter what, producing the error message given. I’m using it as a reminder to finish the test.\n\n\nFinally, we have the if name == 'main' clause (if you’ve not seen it before, that’s how a Python script checks if it’s been executed from the command line, rather than just imported by another script). We call unittest.main(), which launches the unittest test runner, which will automatically find test classes and methods in the file and run them.\n\n\nwarnings='ignore' suppresses a superfluous ResourceWarning which was being emitted at the time of writing. It may have disappeared by the time you read this; feel free to try removing it!\n\n\nRunning our new test", "!python3 functional_tests.py", "We got the same expected failure but now it looks nice!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jingr1/SelfDrivingCar
matrix/kalman_filter_demo.ipynb
mit
[ "Kalman Filter and your Matrix Class\nOnce you have a working matrix class, you can use the class to run a Kalman filter! \nYou will need to put your matrix class into the workspace:\n* Click above on the \"JUPYTER\" logo. \n* Then open the matrix.py file, and copy in your code there. \n* Make sure to save the matrix.py file. \n* Then click again on the \"JUPYTER\" logo and open this file again.\nYou can also download this file kalman_filter_demo.ipynb and run the demo locally on your own computer.\nOnce you have our matrix class loaded, you are ready to go through the demo. Read through this file and run each cell one by one. You do not need to write any code in this IPython notebook.\nThe demonstration has two different sections. The first section creates simulated data. The second section runs a Kalman filter on the data and visualizes the results.\nKalman Filters - Why are they useful?\nKalman filters are really good at taking noisy sensor data and smoothing out the data to make more accurate predictions. For autonomous vehicles, Kalman filters can be used in object tracking. \nKalman Filters and Sensors\nObject tracking is often done with radar and lidar sensors placed around the vehicle. A radar sensor can directly measure the distance and velocity of objects moving around the vehicle. A lidar sensor only measures distance.\nPut aside a Kalman filter for a minute and think about how you could use lidar data to track an object. Let's say there is a bicyclist riding around in front of you. You send out a lidar signal and receive the signal back. The lidar sensor tells you that the bicycle is 10 meters directly ahead of you but gives you no velocity information.\nBy the time your lidar device sends out another signal, maybe 0.05 seconds will have passed. But during those 0.05 seconds, your vehicle still needs to keep track of the bicycle. So your vehicle will predict where it thinks the bicycle will be. But your vehicle has no bicycle velocity information.\nAfter 0.05 seconds, the lidar device sends out and receives another signal. This time, the bicycle is 9.95 meters ahead of you. Now you know that the bicycle is traveling -1 meter per second towards you. For the next 0.05 seconds, your vehicle will assume the bicycle is traveling -1 m/s towards you. Then another lidar signal goes out and comes back, and you can update the position and velocity again.\nSensor Noise\nUnfortunately, lidar and radar signals are noisy. In other words, they are somewhat inaccurate. A Kalman filter helps to smooth out the noise so that you get a better fix on the bicycle's true position and velocity. \nA Kalman filter does this by weighing the uncertainty in your belief about the location versus the uncertainty in the lidar or radar measurement. If your belief is very uncertain, the Kalman filter gives more weight to the sensor. If the sensor measurement has more uncertainty, your belief about the location gets more weight than the sensor measurement. \nPart 1 - Generate Data\nThe next few cells in the IPython notebook generate simulated data. Imagine you are in a vehicle and tracking another car in front of you. All of the data you track will be relative to your position. \nIn this simulation, you are on a one-dimensional road where the car you are tracking can only move forwards or backwards. For this simulated data, the tracked vehicle starts 5 meters ahead of you traveling at 100 km/h. The vehicle is accelerating at -10 m/s^2. In other words, the vehicle is slowing down. 
\nOnce the vehicle stops at 0 km/h, the car stays idle for 5 seconds. Then the vehicle continues accelerating towards you until the vehicle is traveling at -10 km/h. The vehicle travels at -10 km/h for 5 seconds. Don't worry too much about the trajectory of the other vehicle; this will be displayed for you in a visualization\nYou have a single lidar sensor on your vehicle that is tracking the other car. The lidar sensor takes a measurment once every 50 milliseconds.\nRun the code cell below to start the simulator and collect data about the tracked car. Noticed the line \nimport matrix as m, which imports your matrix code from the final project. You will not see any output yet when running this cell.", "%matplotlib inline\n\nimport pandas as pd\nimport math\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport datagenerator\nimport matrix as m\n\nmatplotlib.rcParams.update({'font.size': 16})\n\n# data_groundtruth() has the following inputs:\n# Generates Data\n# Input variables are:\n# initial position meters\n# initial velocity km/h\n# final velocity (should be a negative number) km/h\n# acceleration (should be a negative number) m/s^2\n# how long the vehicle should idle \n# how long the vehicle should drive in reverse at constant velocity\n# time between lidar measurements in milliseconds\n\ntime_groundtruth, distance_groundtruth, velocity_groundtruth, acceleration_groundtruth = datagenerator.generate_data(5, 100, -10, -10,\n 5000, 5000, 50)\ndata_groundtruth = pd.DataFrame(\n {'time': time_groundtruth,\n 'distance': distance_groundtruth,\n 'velocity': velocity_groundtruth,\n 'acceleration': acceleration_groundtruth\n })", "Visualizing the Tracked Object Distance\nThe next cell visualizes the simulating data. The first visualization shows the object distance over time. You can see that the car is moving forward although decelerating. Then the car stops for 5 seconds and then drives backwards for 5 seconds.", "ax1 = data_groundtruth.plot(kind='line', x='time', y='distance', title='Object Distance Versus Time')\nax1.set(xlabel='time (milliseconds)', ylabel='distance (meters)')", "Visualizing Velocity Over Time\nThe next cell outputs a visualization of the velocity over time. The tracked car starts at 100 km/h and decelerates to 0 km/h. Then the car idles and eventually starts to decelerate again until reaching -10 km/h.", "ax2 = data_groundtruth.plot(kind='line', x='time', y='velocity', title='Object Velocity Versus Time')\nax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)')", "Visualizing Acceleration Over Time\nThis cell visualizes the tracked cars acceleration. The vehicle declerates at 10 m/s^2. Then the vehicle stops for 5 seconds and briefly accelerates again.", "data_groundtruth['acceleration'] = data_groundtruth['acceleration'] * 1000 / math.pow(60 * 60, 2)\nax3 = data_groundtruth.plot(kind='line', x='time', y='acceleration', title='Object Acceleration Versus Time')\nax3.set(xlabel='time (milliseconds)', ylabel='acceleration (m/s^2)')", "Simulate Lidar Data\nThe following code cell creates simulated lidar data. Lidar data is noisy, so the simulator takes ground truth measurements every 0.05 seconds and then adds random noise.", "# make lidar measurements\nlidar_standard_deviation = 0.15\nlidar_measurements = datagenerator.generate_lidar(distance_groundtruth, lidar_standard_deviation)\nlidar_time = time_groundtruth", "Visualize Lidar Meausrements\nRun the following cell to visualize the lidar measurements versus the ground truth. 
The ground truth is shown in red, and you can see that the lidar measurements are a bit noisy.", "data_lidar = pd.DataFrame(\n {'time': time_groundtruth,\n 'distance': distance_groundtruth,\n 'lidar': lidar_measurements\n })\n\nmatplotlib.rcParams.update({'font.size': 22})\n\nax4 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(20, 15), alpha=0.8,\n title = 'Lidar Measurements Versus Ground Truth', color='red')\nax5 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar measurements', ax=ax4, alpha=0.6, color='g')\nax5.set(xlabel='time (milliseconds)', ylabel='distance (meters)')\nplt.show()", "Part 2 - Using a Kalman Filter\nThe next part of the demonstration will use your matrix class to run a Kalman filter. This first cell initializes variables and defines a few functions.\nThe following cell runs the Kalman filter using the lidar data.", "# Kalman Filter Initialization\n\ninitial_distance = 0\ninitial_velocity = 0\n\nx_initial = m.Matrix([[initial_distance], [initial_velocity * 1e-3 / (60 * 60)]])\nP_initial = m.Matrix([[5, 0],[0, 5]])\n\nacceleration_variance = 50\nlidar_variance = math.pow(lidar_standard_deviation, 2)\n\nH = m.Matrix([[1, 0]])\nR = m.Matrix([[lidar_variance]])\nI = m.identity(2)\n\ndef F_matrix(delta_t):\n return m.Matrix([[1, delta_t], [0, 1]])\n\ndef Q_matrix(delta_t, variance):\n t4 = math.pow(delta_t, 4)\n t3 = math.pow(delta_t, 3)\n t2 = math.pow(delta_t, 2)\n \n return variance * m.Matrix([[(1/4)*t4, (1/2)*t3], [(1/2)*t3, t2]])", "Run the Kalman filter\nThe next code cell runs the Kalman filter. In this demonstration, the prediction step starts with the second lidar measurement. When the first lidar signal arrives, there is no previous lidar measurement with which to calculate velocity. In other words, the Kalman filter predicts where the vehicle is going to be, but it can't make a prediction until time has passed between the first and second lidar reading. \nThe Kalman filter has two steps: a prediction step and an update step. In the prediction step, the filter uses a motion model to figure out where the object has traveled in between sensor measurements. The update step uses the sensor measurement to adjust the belief about where the object is.", "# Kalman Filter Implementation\n\nx = x_initial\nP = P_initial\n\nx_result = []\ntime_result = []\nv_result = []\n\n\nfor i in range(len(lidar_measurements) - 1):\n \n # calculate time that has passed between lidar measurements\n delta_t = (lidar_time[i + 1] - lidar_time[i]) / 1000.0\n\n # Prediction Step - estimates how far the object traveled during the time interval\n F = F_matrix(delta_t)\n Q = Q_matrix(delta_t, acceleration_variance)\n \n x_prime = F * x\n P_prime = F * P * F.T() + Q\n \n # Measurement Update Step - updates belief based on lidar measurement\n y = m.Matrix([[lidar_measurements[i + 1]]]) - H * x_prime\n S = H * P_prime * H.T() + R\n K = P_prime * H.T() * S.inverse()\n x = x_prime + K * y\n P = (I - K * H) * P_prime\n\n # Store distance and velocity belief and current time\n x_result.append(x[0][0])\n v_result.append(3600.0/1000 * x[1][0])\n time_result.append(lidar_time[i+1])\n \nresult = pd.DataFrame(\n {'time': time_result,\n 'distance': x_result,\n 'velocity': v_result\n })", "Visualize the Results\nThe following code cell outputs a visualization of the Kalman filter. The chart contains ground turth, the lidar measurements, and the Kalman filter belief. 
Notice that the Kalman filter tends to smooth out the information obtained from the lidar measurement.\nIt turns out that using multiple sensors like radar and lidar at the same time will give even better results. Using more than one type of sensor at once is called sensor fusion, which you will learn about in the Self-Driving Car Engineer Nanodegree.", "ax6 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(22, 18), alpha=.3, title='Lidar versus Kalman Filter versus Ground Truth')\nax7 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar sensor', ax=ax6)\nax8 = result.plot(kind='scatter', x = 'time', y = 'distance', label='kalman', ax=ax7, color='r')\nax8.set(xlabel='time (milliseconds)', ylabel='distance (meters)')\nplt.show()", "Visualize the Velocity\nOne of the most interesting benefits of Kalman filters is that they can give you insights into variables that you\ncannot directly measure. Although lidar does not directly give velocity information, the Kalman filter can infer velocity from the lidar measurements.\nThis visualization shows the Kalman filter velocity estimation versus the ground truth. The motion model used in this Kalman filter is relatively simple; it assumes velocity is constant and that acceleration is random noise. You can see that this motion model might be too simplistic because the Kalman filter has trouble predicting velocity as the object decelerates.", "ax1 = data_groundtruth.plot(kind='line', x='time', y ='velocity', label='ground truth', figsize=(22, 18), alpha=.8, title='Kalman Filter versus Ground Truth Velocity')\nax2 = result.plot(kind='scatter', x = 'time', y = 'velocity', label='kalman', ax=ax1, color='r')\nax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tylerwmarrs/billboard-hot-100-lyric-analysis
notebooks/exploratory/02-raw-lyric-analysis.ipynb
mit
[ "%matplotlib inline\n\nimport os\nimport sys\nnb_dir = os.path.split(os.getcwd())[0]\nproject_dir = os.path.join(nb_dir, os.pardir)\n\nif project_dir not in sys.path:\n sys.path.append(project_dir)\n\nfrom src import webscrapers\nfrom src import corpus", "Scrape swear word list\nWe scrape swear words from the web from the site:\nhttp://www.noswearing.com/\nIt is a community driven list of swear words.", "import string\nimport os\nimport requests\nfrom fake_useragent import UserAgent\nfrom lxml import html\n\ndef requests_get(url): \n ua = UserAgent().random \n return requests.get(url, headers={'User-Agent': ua})\n\ndef get_swear_words(save_file='swear-words.txt'): \n \"\"\"\n Scrapes a comprehensive list of swear words from noswearing.com\n \"\"\"\n words = ['niggas']\n if os.path.isfile(save_file):\n with open(save_file, 'rt') as f:\n for line in f:\n words.append(line.strip())\n \n return words\n \n base_url = 'http://www.noswearing.com/dictionary/'\n letters = '1' + string.ascii_lowercase\n \n for letter in letters:\n full_url = base_url + letter\n result = requests_get(full_url)\n tree = html.fromstring(result.text)\n search = tree.xpath(\"//td[@valign='top']/a[@name and string-length(@name) != 0]\")\n \n if search is None:\n continue\n \n for result in search:\n words.append(result.get('name').lower())\n \n with open(save_file, 'wt') as f:\n for word in words:\n f.write(word)\n f.write('\\n')\n \n return words\n\nprint(get_swear_words())\n", "Testing TextBlob\nI don't really like TextBlob as it tries to be \"nice\", but lacks a lot of basic functionality.\n\nStop words not included\nTokenizer is pretty meh.\nNo built in way to obtain word frequency", "import os\nimport operator\n\nimport pandas as pd\nfrom textblob import TextBlob, WordList\nfrom nltk.corpus import stopwords\n\ndef get_data_paths():\n dir_path = os.path.dirname(os.path.realpath('.'))\n data_dir = os.path.join(dir_path, 'billboard-hot-100-data')\n dirs = [os.path.join(data_dir, d, 'songs.csv') for d in os.listdir(data_dir) \n if os.path.isdir(os.path.join(data_dir, d))]\n \n return dirs\n\ndef lyric_file_to_text_blob(row):\n \"\"\"\n Transform lyrics column to TextBlob instances.\n \"\"\"\n return TextBlob(row['lyrics'])\n\ndef remove_stop_words(word_list):\n wl = WordList([])\n \n stop_words = stopwords.words('english')\n for word in word_list:\n if word.lower() not in stop_words:\n wl.append(word)\n \n return wl\n\ndef word_freq(words, sort='desc'):\n \"\"\"\n Returns frequency table for all words provided in the list.\n \"\"\"\n \n reverse = sort == 'desc'\n \n freq = {}\n for word in words:\n if word in freq:\n freq[word] = freq[word] + 1\n else:\n freq[word] = 1\n \n return sorted(freq.items(), key=operator.itemgetter(1), reverse=reverse)\n\ndata_paths = corpus.raw_data_dirs()\nsongs = corpus.load_songs(data_paths[0])\n\nsongs = pd.DataFrame.from_dict(songs)\nsongs[\"lyrics\"] = songs.apply(lyric_file_to_text_blob, axis=1)\n\nall_words = WordList([])\n\nfor i, row in songs.iterrows():\n all_words.extend(row['lyrics'].words)\n\ncleaned_all_words = remove_stop_words(all_words)\ncleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words.lower()), columns=['word', 'frequency'])\ncleaned_all_words\n\nimport pandas as pd\nimport nltk\n\ndef remove_extra_junk(word_list):\n words = []\n remove = [\",\", \"n't\", \"'m\", \")\", \"(\", \"'s\", \"'\", \"]\", \"[\"]\n \n for word in word_list:\n if word not in remove:\n words.append(word)\n \n return words\n \n \ndata_paths = corpus.raw_data_dirs()\nsongs = 
corpus.load_songs(data_paths[0])\nsongs = pd.DataFrame.from_dict(songs)\n\nall_words = []\n\nfor i, row in songs.iterrows():\n all_words.extend(nltk.tokenize.word_tokenize(row['lyrics']))\n\ncleaned_all_words = [w.lower() for w in remove_extra_junk(remove_stop_words(all_words))]\nfreq_dist = nltk.FreqDist(cleaned_all_words)\n\nfreq_dist.plot(50)\nfreq_dist.most_common(100)\n#cleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words), columns=['word', 'frequency'])\n#cleaned_all_words", "Repetitive songs skewing data?\nSome songs may be super repetitive. Let's look at a couple of songs that have the word in the title. These songs probably repeat the title a decent amount in their song. Hence treating all lyrics as one group of text is less reliable in analyzing frequency.\nTo simplify this process, we can look at only single word titles. This will at least give us a general idea if the data could be skewed by a single song or not.", "for i, song in songs.iterrows():\n title = song['title']\n title_words = title.split(' ')\n \n if len(title_words) > 1:\n continue\n \n lyrics = song['lyrics']\n words = nltk.tokenize.word_tokenize(lyrics)\n clean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))]\n \n dist = nltk.FreqDist(clean_words)\n freq = dist.freq(title_words[0].lower())\n \n if freq > .1:\n print(song['artist'], title)", "Seems pretty repetitive\nThere are a handful of single word song titles that repeat the title within the song at least 10% of the time. This gives us a general idea that there is most likely a skew to the data. I think it is safe to assume that if a single word is repeated many times, the song is most likely repetitive.\nLet's look at the song \"water\" by Ugly God to confirm.", "song_title_to_analyze = 'Water'\n\nlyrics = songs['lyrics'].where(songs['title'] == song_title_to_analyze, '').max()\nprint(lyrics)\nwords = nltk.tokenize.word_tokenize(lyrics)\nclean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))]\nwater_dist = nltk.FreqDist(clean_words)\nwater_dist.plot(25)\n\nwater_dist.freq(song_title_to_analyze.lower())", "Looking at swear word distribution\nLet's look at the distribution of swear words...", "sws = []\n\nfor sw in set(corpus.swear_words()):\n sws.append({'word': sw,\n 'dist': freq_dist.freq(sw)})\n \nsw_df = pd.DataFrame.from_dict(sws)\nsw_df.nlargest(10, 'dist').plot(x='word', kind='bar')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karissa/pyeda
ipynb/SymPy_Comparison.ipynb
bsd-2-clause
[ "Introduction\nIn this notebook, we will demonstrate some of the differences between SymPy's logic module, and PyEDA's logic expressions.", "import sympy\n\nimport pyeda.boolalg.expr\nimport pyeda.boolalg.bfarray", "Create Variables\nThe xs array is a tuple of SymPy symbolic variables,\nand the ys array is a PyEDA function array.", "xs = sympy.symbols(\",\".join(\"x%d\" % i for i in range(64)))\n\nys = pyeda.boolalg.bfarray.exprvars('y', 64)", "Basic Boolean Functions\nCreate a SymPy XOR function:", "f = sympy.Xor(*xs[:4])", "Create a PyEDA XOR function:", "g = pyeda.boolalg.expr.Xor(*ys[:4])", "SymPy atoms method is similar to PyEDA's support property:", "f.atoms()\n\ng.support", "SymPy's subs method is similar to PyEDA's restrict method:", "f.subs({xs[0]: 0, xs[1]: 1})\n\ng.restrict({ys[0]: 0, ys[1]: 1})", "Conversion to NNF\nConversion to negation normal form is also similar. One difference is that SymPy inverts the variables by applying a Not operator, but PyEDA converts inverted variables to complements (a negative literal).", "sympy.to_nnf(f)\n\ntype(sympy.Not(xs[0]))\n\ng.to_nnf()\n\ntype(~ys[0])", "Conversion to DNF\nConversion to disjunctive normal form, on the other hand, has some differences. With only four input variables, SymPy takes a couple seconds to do the calculation. The output is large, with unsimplified values and redundant clauses.", "sympy.to_dnf(f)", "PyEDA's DNF conversion is minimal:", "g.to_dnf()", "It's a little hard to do an apples-to-apples comparison, because 1) SymPy is pure Python and 2) the algorithms are probably different.\nThe simplify_logic function actually looks better for comparison:", "from sympy.logic import simplify_logic\n\nsimplify_logic(f)\n\nsimplify_logic(f)", "Running this experiment from N=2 to N=6 shows that PyEDA's runtime grows significantly slower.", "import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nN = 5\n\nsympy_times = (.000485, .000957, .00202, .00426, .0103)\npyeda_times = (.0000609, .000104, .000147, .00027, .000451)\n\nind = np.arange(N) # the x locations for the groups\nwidth = 0.35 # the width of the bars\n\nfig, ax = plt.subplots()\n\nrects1 = ax.bar(ind, sympy_times, width, color='r')\nrects2 = ax.bar(ind + width, pyeda_times, width, color='y')\n\n# add some text for labels, title and axes ticks\nax.set_ylabel('Time (s)')\nax.set_title('SymPy vs. PyEDA: Xor(x[0], x[1], ..., x[n-1]) to DNF')\nax.set_xticks(ind + width)\nax.set_xticklabels(('N=2', 'N=3', 'N=4', 'N=5', 'N=6'))\n\nax.legend((rects1[0], rects2[0]), ('SymPy', 'PyEDA'))\n\nplt.show()", "Going a bit further, things get worse.\nThese numbers are from my laptop:\n| N | sympy | pyeda | ratio |\n|----|----------|----------|--------|\n| 2 | .000485 | .0000609 | 7.96 |\n| 3 | .000957 | .000104 | 9.20 |\n| 4 | .00202 | .000147 | 13.74 |\n| 5 | .00426 | .00027 | 15.78 |\n| 6 | .0103 | .000451 | 22.84 |\n| 7 | .0231 | .000761 | 30.35 |\n| 8 | .0623 | .00144 | 43.26 |\n| 9 | .162 | .00389 | 41.65 |\n| 10 | .565 | .00477 | 118.45 |\n| 11 | 1.78 | .012 | 148.33 |\n| 12 | 6.46 | .0309 | 209.06 |\nSimplification\nSymPy supports some obvious simplifications, but PyEDA supports more. 
Here are a few examples.", "sympy.Equivalent(xs[0], xs[1], 0)\n\npyeda.boolalg.expr.Equal(ys[0], ys[1], 0)\n\nsympy.ITE(xs[0], 0, xs[1])\n\npyeda.boolalg.expr.ITE(ys[0], 0, ys[1])\n\nsympy.Or(xs[0], sympy.Or(xs[1], xs[2]))\n\npyeda.boolalg.expr.Or(ys[0], pyeda.boolalg.expr.Or(ys[1], ys[2]))\n\nsympy.Xor(xs[0], sympy.Not(sympy.Xor(xs[1], xs[2])))\n\npyeda.boolalg.expr.Xor(ys[0], pyeda.boolalg.expr.Xnor(ys[1], ys[2]))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex SDK: Submit a HyperParameter tuning training job with TensorFlow\nInstallation\nInstall the latest (preview) version of Vertex SDK.", "! pip3 install -U google-cloud-aiplatform --user", "Install the Google cloud-storage library as well.", "! pip3 install google-cloud-storage", "Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"AUTORUN\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. 
Skip this step.\nNote: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.", "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION gs://$BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al gs://$BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.", "import os\nimport sys\nimport time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex AI constants\nSetup up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.\nAPI_PREDICT_ENDPOINT: The Vertex AI API service endpoint for prediction.\nPARENT: The Vertex AI location root path for dataset, model and endpoint resources.", "# API Endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex AI location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.\n\nDataset Service for managed datasets.\nModel Service for managed models.\nPipeline Service for training.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving. 
Note: Prediction has a different service endpoint.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)", "Prepare a trainer script\nPackage assembly", "# Make folder for python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\\ntag_build =\\n\\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\\n# Requires TensorFlow Datasets\\n\\\nsetuptools.setup(\\n\\\n install_requires=[\\n\\\n 'tensorflow_datasets==1.3.0',\\n\\\n ],\\n\\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\\nName: Hyperparameter Tuning - Boston Housing\\n\\\nVersion: 0.0.0\\n\\\nSummary: Demonstration hyperparameter tuning script\\n\\\nHome-page: www.google.com\\n\\\nAuthor: Google\\n\\\nAuthor-email: [email protected]\\n\\\nLicense: Public\\n\\\nDescription: Demo\\n\\\nPlatform: Vertex AI\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! 
touch custom/trainer/__init__.py", "Task.py contents", "%%writefile custom/trainer/task.py\n# hyperparameter tuningfor Boston Housing\n \nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nfrom hypertune import HyperTune\nimport numpy as np\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default='/tmp/saved_model', type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n default=0.001, type=float,\n help='Learning rate.')\nparser.add_argument('--units', dest='units',\n default=64, type=int,\n help='Number of units.')\nparser.add_argument('--epochs', dest='epochs',\n default=20, type=int,\n help='Number of epochs.')\nparser.add_argument('--param-file', dest='param_file',\n default='/tmp/param.txt', type=str,\n help='Output file for parameters')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\n\ndef make_dataset():\n # Scaling Boston Housing data features\n def scale(feature):\n max = np.max(feature)\n feature = (feature / max).astype(np.float)\n return feature, max\n\n (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(\n path=\"boston_housing.npz\", test_split=0.2, seed=113\n )\n params = []\n for _ in range(13):\n x_train[_], max = scale(x_train[_])\n x_test[_], _ = scale(x_test[_])\n params.append(max)\n \n # store the normalization (max) value for each feature\n with tf.io.gfile.GFile(args.param_file, 'w') as f:\n f.write(str(params))\n return (x_train, y_train), (x_test, y_test)\n\n# Build the Keras model\ndef build_and_compile_dnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Dense(args.units, activation='relu', input_shape=(13,)),\n tf.keras.layers.Dense(args.units, activation='relu'),\n tf.keras.layers.Dense(1, activation='linear')\n ])\n model.compile(\n loss='mse',\n optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))\n return model\n\nmodel = build_and_compile_dnn_model()\n\n# Instantiate the HyperTune reporting object\nhpt = HyperTune()\n\n# Reporting callback\nclass HPTCallback(tf.keras.callbacks.Callback):\n\n def on_epoch_end(self, epoch, logs=None):\n global hpt\n hpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='val_loss',\n metric_value=logs['val_loss'],\n global_step=epoch)\n\n# Train the model\nBATCH_SIZE = 16\n(x_train, y_train), (x_test, y_test) = make_dataset()\nmodel.fit(x_train, y_train, epochs=args.epochs, batch_size=BATCH_SIZE, validation_split=0.1, callbacks=[HPTCallback()])\nmodel.save(args.model_dir)\n", "Store training script on your Cloud Storage bucket", "! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! 
gsutil cp custom.tar.gz gs://$BUCKET_NAME/hpt_boston_housing.tar.gz", "Train a model\nprojects.locations.hyperparameterTuningJob.create\nRequest", "JOB_NAME = \"hyperparameter_tuning_\" + TIMESTAMP\n\nWORKER_POOL_SPEC = [\n {\n \"replica_count\": 1,\n \"machine_spec\": {\"machine_type\": \"n1-standard-4\", \"accelerator_count\": 0},\n \"python_package_spec\": {\n \"executor_image_uri\": \"gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\",\n \"package_uris\": [\"gs://\" + BUCKET_NAME + \"/hpt_boston_housing.tar.gz\"],\n \"python_module\": \"trainer.task\",\n \"args\": [\"--model-dir=\" + \"gs://{}/{}\".format(BUCKET_NAME, JOB_NAME)],\n },\n }\n]\n\nSTUDY_SPEC = {\n \"metrics\": [\n {\"metric_id\": \"val_loss\", \"goal\": aip.StudySpec.MetricSpec.GoalType.MINIMIZE}\n ],\n \"parameters\": [\n {\n \"parameter_id\": \"lr\",\n \"discrete_value_spec\": {\"values\": [0.001, 0.01, 0.1]},\n \"scale_type\": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,\n },\n {\n \"parameter_id\": \"units\",\n \"integer_value_spec\": {\"min_value\": 32, \"max_value\": 256},\n \"scale_type\": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE,\n },\n ],\n \"algorithm\": aip.StudySpec.Algorithm.RANDOM_SEARCH,\n}\n\nhyperparameter_tuning_job = aip.HyperparameterTuningJob(\n display_name=JOB_NAME,\n trial_job_spec={\"worker_pool_specs\": WORKER_POOL_SPEC},\n study_spec=STUDY_SPEC,\n max_trial_count=6,\n parallel_trial_count=1,\n)\n\nprint(\n MessageToJson(\n aip.CreateHyperparameterTuningJobRequest(\n parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"hyperparameterTuningJob\": {\n \"displayName\": \"hyperparameter_tuning_20210226020029\",\n \"studySpec\": {\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"goal\": \"MINIMIZE\"\n }\n ],\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"discreteValueSpec\": {\n \"values\": [\n 0.001,\n 0.01,\n 0.1\n ]\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n },\n {\n \"parameterId\": \"units\",\n \"integerValueSpec\": {\n \"minValue\": \"32\",\n \"maxValue\": \"256\"\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ],\n \"algorithm\": \"RANDOM_SEARCH\"\n },\n \"maxTrialCount\": 6,\n \"parallelTrialCount\": 1,\n \"trialJobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"replicaCount\": \"1\",\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029\"\n ]\n }\n }\n ]\n }\n }\n}\nCall", "request = clients[\"job\"].create_hyperparameter_tuning_job(\n parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job\n)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752\",\n \"displayName\": \"hyperparameter_tuning_20210226020029\",\n \"studySpec\": {\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"goal\": \"MINIMIZE\"\n }\n ],\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"discreteValueSpec\": {\n \"values\": [\n 0.001,\n 0.01,\n 0.1\n ]\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n },\n {\n 
\"parameterId\": \"units\",\n \"integerValueSpec\": {\n \"minValue\": \"32\",\n \"maxValue\": \"256\"\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ],\n \"algorithm\": \"RANDOM_SEARCH\"\n },\n \"maxTrialCount\": 6,\n \"parallelTrialCount\": 1,\n \"trialJobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"replicaCount\": \"1\",\n \"diskSpec\": {\n \"bootDiskType\": \"pd-ssd\",\n \"bootDiskSizeGb\": 100\n },\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029\"\n ]\n }\n }\n ]\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T02:02:02.787187Z\",\n \"updateTime\": \"2021-02-26T02:02:02.787187Z\"\n}", "# The full unique ID for the hyperparameter tuningjob\nhyperparameter_tuning_id = request.name\n# The short numeric ID for the hyperparameter tuningjob\nhyperparameter_tuning_short_id = hyperparameter_tuning_id.split(\"/\")[-1]\n\nprint(hyperparameter_tuning_id)", "projects.locations.hyperparameterTuningJob.get\nCall", "request = clients[\"job\"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752\",\n \"displayName\": \"hyperparameter_tuning_20210226020029\",\n \"studySpec\": {\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"goal\": \"MINIMIZE\"\n }\n ],\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"discreteValueSpec\": {\n \"values\": [\n 0.001,\n 0.01,\n 0.1\n ]\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n },\n {\n \"parameterId\": \"units\",\n \"integerValueSpec\": {\n \"minValue\": \"32\",\n \"maxValue\": \"256\"\n },\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ],\n \"algorithm\": \"RANDOM_SEARCH\"\n },\n \"maxTrialCount\": 6,\n \"parallelTrialCount\": 1,\n \"trialJobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"replicaCount\": \"1\",\n \"diskSpec\": {\n \"bootDiskType\": \"pd-ssd\",\n \"bootDiskSizeGb\": 100\n },\n \"pythonPackageSpec\": {\n \"executorImageUri\": \"gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029\"\n ]\n }\n }\n ]\n },\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T02:02:02.787187Z\",\n \"updateTime\": \"2021-02-26T02:02:02.787187Z\"\n}\nWait for the study to complete", "while True:\n response = clients[\"job\"].get_hyperparameter_tuning_job(\n name=hyperparameter_tuning_id\n )\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Study trials have not completed:\", response.state)\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n break\n else:\n print(\"Study trials have completed:\", response.end_time - response.start_time)\n break\n time.sleep(20)", "Review the results of the study", "best = (None, None, None, 0.0)\nresponse = 
clients[\"job\"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)\nfor trial in response.trials:\n print(MessageToJson(trial.__dict__[\"_pb\"]))\n # Keep track of the best outcome\n try:\n if float(trial.final_measurement.metrics[0].value) > best[3]:\n best = (\n trial.id,\n float(trial.parameters[0].value),\n float(trial.parameters[1].value),\n float(trial.final_measurement.metrics[0].value),\n )\n except:\n pass\n\nprint()\nprint(\"ID\", best[0])\nprint(\"Decay\", best[1])\nprint(\"Learning Rate\", best[2])\nprint(\"Validation Accuracy\", best[3])", "Example output:\n```\n{\n \"id\": \"1\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.1\n },\n {\n \"parameterId\": \"units\",\n \"value\": 80.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"19\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 46.61515110294993\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:05:16.935353384Z\",\n \"endTime\": \"2021-02-26T02:12:44Z\"\n}\n{\n \"id\": \"2\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.01\n },\n {\n \"parameterId\": \"units\",\n \"value\": 45.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"19\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 32.55313952376203\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:15:31.357856840Z\",\n \"endTime\": \"2021-02-26T02:24:18Z\"\n}\n{\n \"id\": \"3\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.1\n },\n {\n \"parameterId\": \"units\",\n \"value\": 70.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"19\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 42.709188321741614\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:26:40.704476222Z\",\n \"endTime\": \"2021-02-26T02:34:21Z\"\n}\n{\n \"id\": \"4\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.01\n },\n {\n \"parameterId\": \"units\",\n \"value\": 173.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"17\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 46.12480219399057\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:37:45.275581053Z\",\n \"endTime\": \"2021-02-26T02:51:07Z\"\n}\n{\n \"id\": \"5\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.01\n },\n {\n \"parameterId\": \"units\",\n \"value\": 223.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"19\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 24.875632611716664\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:53:32.612612421Z\",\n \"endTime\": \"2021-02-26T02:54:19Z\"\n}\n{\n \"id\": \"6\",\n \"state\": \"SUCCEEDED\",\n \"parameters\": [\n {\n \"parameterId\": \"lr\",\n \"value\": 0.1\n },\n {\n \"parameterId\": \"units\",\n \"value\": 123.0\n }\n ],\n \"finalMeasurement\": {\n \"stepCount\": \"13\",\n \"metrics\": [\n {\n \"metricId\": \"val_loss\",\n \"value\": 43.352300690441595\n }\n ]\n },\n \"startTime\": \"2021-02-26T02:56:47.323707459Z\",\n \"endTime\": \"2021-02-26T03:03:49Z\"\n}\nID 1\nDecay 0.1\nLearning Rate 80.0\nValidation Accuracy 46.61515110294993\n```\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "delete_hpt_job = True\ndelete_bucket = True\n\n# Delete the hyperparameter tuningusing the Vertex AI fully qualified 
identifier for the custom training\ntry:\n if delete_hpt_job:\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hyperparameter_tuning_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ponderousmad/pyndent
notMNIST_setup.ipynb
mit
[ "notMINST Data Setup\nThis notebook sets up the the notMNIST dataset. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.\nThis notebook is derived from the Udacity Tensorflow Course Assignment 1", "%matplotlib inline\nfrom __future__ import print_function\n\nimport gzip\nimport os\nimport sys\nimport tarfile\nimport urllib.request\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom six.moves import cPickle as pickle\n\nimport outputer", "Download the dataset of characters 'A' to 'J' rendered in various fonts as 28x28 images.\nThere is training set of about 500k images and a test set of about 19000 images.", "url = \"http://yaroslavvb.com/upload/notMNIST/\"\ndata_path = outputer.setup_directory(\"notMNIST\")\n\ndef maybe_download(path, filename, expected_bytes):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n file_path = os.path.join(path, filename)\n if not os.path.exists(file_path):\n file_path, _ = urllib.request.urlretrieve(url + filename, file_path)\n statinfo = os.stat(file_path)\n if statinfo.st_size == expected_bytes:\n print(\"Found\", file_path, \"with correct size.\")\n else:\n raise Exception(\"Error downloading \" + filename)\n return file_path\n\ntrain_filename = maybe_download(data_path, \"notMNIST_large.tar.gz\", 247336696)\ntest_filename = maybe_download(data_path, \"notMNIST_small.tar.gz\", 8458043)", "Extract the dataset from the compressed .tar.gz file.\nThis should give you a set of directories, labelled A through J.", "def extract(filename, root, class_count):\n # remove path and .tar.gz\n dir_name = os.path.splitext(os.path.splitext(os.path.basename(filename))[0])[0]\n path = os.path.join(root, dir_name)\n print(\"Extracting\", filename, \"to\", path)\n tar = tarfile.open(filename)\n tar.extractall(path=root)\n tar.close()\n data_folders = [os.path.join(path, d) for d in sorted(os.listdir(path))]\n if len(data_folders) != class_count:\n raise Exception(\"Expected %d folders, one per class. 
Found %d instead.\" %\n (class_count, len(data_folders)))\n print(data_folders)\n return data_folders\n\ntrain_folders = []\ntest_folders = []\n\nfor name in os.listdir(data_path):\n path = os.path.join(data_path, name)\n target = None\n print(\"Checking\", path)\n if path.endswith(\"_small\"):\n target = test_folders\n elif path.endswith(\"_large\"):\n target = train_folders\n if target is not None:\n target.extend([os.path.join(path, name) for name in os.listdir(path)])\n print(\"Found\", target)\n\nexpected_classes = 10\n\nif len(train_folders) < expected_classes:\n train_folders = extract(train_filename, data_path, expected_classes)\n\nif len(test_folders) < expected_classes:\n test_folders = extract(test_filename, data_path, expected_classes)", "Inspect Data\nVerify that the images contain rendered glyphs.", "Image(filename=\"notMNIST/notMNIST_small/A/MDEtMDEtMDAudHRm.png\")\n\nImage(filename=\"notMNIST/notMNIST_large/A/a2F6b28udHRm.png\")\n\nImage(filename=\"notMNIST/notMNIST_large/C/ZXVyb2Z1cmVuY2UgaXRhbGljLnR0Zg==.png\")\n\n# This I is all white\nImage(filename=\"notMNIST/notMNIST_small/I/SVRDIEZyYW5rbGluIEdvdGhpYyBEZW1pLnBmYg==.png\")", "Convert the data into an array of normalized grayscale floating point images, and an array of classification labels.\nUnreadable images are skipped.", "\ndef normalize_separator(path):\n return path.replace(\"\\\\\", \"/\")\n\ndef load(data_folders, set_id, min_count, max_count):\n # Create arrays large enough for maximum expected data.\n dataset = np.ndarray(shape=(max_count, image_size, image_size), dtype=np.float32)\n labels = np.ndarray(shape=(max_count), dtype=np.int32)\n label_index = 0\n image_index = 0\n \n solid_blacks = []\n solid_whites = []\n \n for folder in sorted(data_folders):\n print(folder)\n for image in os.listdir(folder):\n if image_index >= max_count:\n raise Exception(\"More than %d images!\" % (max_count,))\n image_file = os.path.join(folder, image)\n if normalize_separator(image_file) in skip_list:\n continue\n try:\n raw_data = ndimage.imread(image_file)\n \n # Keep track of images a that are solid white or solid black.\n if np.all(raw_data == 0):\n solid_blacks.append(image_file)\n if np.all(raw_data == int(pixel_depth)):\n solid_whites.append(image_file)\n \n # Convert to float and normalize.\n image_data = (raw_data.astype(float) - pixel_depth / 2) / pixel_depth\n\n if image_data.shape != (image_size, image_size):\n raise Exception(\"Unexpected image shape: %s\" % str(image_data.shape))\n\n # Capture the image data and label.\n dataset[image_index, :, :] = image_data\n labels[image_index] = label_index\n image_index += 1\n except IOError as e:\n skip_list.append(normalize_separator(image_file))\n print(\"Could not read:\", image_file, ':', e, \"skipping.\")\n label_index += 1\n image_count = image_index\n # Trim down to just the used portion of the arrays.\n dataset = dataset[0:image_count, :, :]\n labels = labels[0:image_count]\n if image_count < min_count:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n print(\"Input data shape:\", dataset.shape)\n print(\"Mean of all normalized pixels:\", np.mean(dataset))\n print(\"Standard deviation of normalized pixels:\", np.std(dataset))\n print('Labels shape:', labels.shape)\n print(\"Found\", len(solid_whites), \"solid white images, and\",\n len(solid_blacks), \"solid black images.\")\n return dataset, labels\n\ntrain_dataset, train_labels = load(train_folders, \"train\", 450000, 550000)\ntest_dataset, test_labels = 
load(test_folders, 'test', 18000, 20000)\n\nskip_list", "Verify Processed Data", "exemplar = plt.imshow(train_dataset[0])\ntrain_labels[0]\n\nexemplar = plt.imshow(train_dataset[373])\ntrain_labels[373]\n\nexemplar = plt.imshow(test_dataset[18169])\ntest_labels[18169]\n\nexemplar = plt.imshow(train_dataset[-9])\ntrain_labels[-9]", "Compress and Store Data", "pickle_file = 'notMNIST/full.pickle'\n\ntry:\n f = gzip.open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hunterherrin/phys202-2015-work
assignments/assignment07/AlgorithmsEx02.ipynb
mit
[ "Algorithms Exercise 2\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nimport numpy as np", "Peak finding\nWrite a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:\n\nProperly handle local maxima at the endpoints of the input array.\nReturn a Numpy array of integer indices.\nHandle any Python iterable as input.", "np.array(range(5)).max()\n\nlist(range(1,5))\nfind_peaks([2,0,1,0,2,0,1])\n\ndef find_peaks(a):\n \"\"\"Find the indices of the local maxima in a sequence.\"\"\"\n b=[]\n c=np.array(a)\n if c[0]>c[1]:\n b.append(0)\n for i in range(1,len(c)-1):\n if c[i]>c[i-1] and c[i]>c[i+1]:\n b.append(i)\n if c[len(c)-1]>c[len(c)-2]:\n b.append(len(c)-1)\n return b\n\np1 = find_peaks([2,0,1,0,2,0,1])\nassert np.allclose(p1, np.array([0,2,4,6]))\np2 = find_peaks(np.array([0,1,2,3]))\nassert np.allclose(p2, np.array([3]))\np3 = find_peaks([3,2,1,0])\nassert np.allclose(p3, np.array([0]))", "Here is a string with the first 10000 digits of $\\pi$ (after the decimal). Write code to perform the following:\n\nConvert that string to a Numpy array of integers.\nFind the indices of the local maxima in the digits of $\\pi$.\nUse np.diff to find the distances between consequtive local maxima.\nVisualize that distribution using an appropriately customized histogram.", "from sympy import pi, N\npi_digits_str = str(N(pi, 10001))[2:]\n\nfirst_10000=np.array(list(pi_digits_str), dtype=int)\npeaks=find_peaks(first_10000)\ndifferences=np.diff(peaks)\nplt.figure(figsize=(10,10))\nplt.hist(differences, 20, (1,20))\nplt.title('Hoe Far Apart the Local Maxima of the First 10,0000 Digits of $\\pi$ Are')\nplt.ylabel('Number of Occurences')\nplt.xlabel('Distance Apart')\nplt.tight_layout()\n\nassert True # use this for grading the pi digits histogram" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
autism-research-centre/Autism-Gradients
6b_networks-inside-gradients.ipynb
gpl-3.0
[ "6b Calculate binned gradient-network overlap\nThis file works out the average z-score inside a gradient percentile area\nwritten by Jan Freyberg for the Brainhack 2017 Project_\nThis should reproduce this analysis", "% matplotlib inline \n\nfrom __future__ import print_function\n\nimport nibabel as nib\nfrom nilearn.image import resample_img\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nimport os\nimport os.path\n\n# The following are a progress bar, these are not strictly necessary:\nfrom ipywidgets import FloatProgress\nfrom IPython.display import display", "Define the variables for this analysis. \n1. how many percentiles the data is divided into\n2. where the Z-Maps (from neurosynth) lie\n3. where the binned gradient maps lie\n4. where a mask of the brain lies (not used at the moment).", "percentiles = range(10)\n\n# unthresholded z-maps from neurosynth:\nzmaps = [os.path.join(os.getcwd(), 'ROIs_Mask', fname) for fname in os.listdir(os.path.join(os.getcwd(), 'ROIs_Mask'))\n if 'z.nii' in fname]\n\n# individual, binned gradient maps, in a list of lists:\ngradmaps = [[os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile), fname)\n for fname in os.listdir(os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile)))]\n for percentile in percentiles]\n\n# a brain mask file:\nbrainmaskfile = os.path.join(os.getcwd(), 'ROIs_Mask', 'rbgmask.nii')", "Next define a function to take the average of an image inside a mask and return it:", "def zinsidemask(zmap, mask):\n # \n zaverage = zmap.dataobj[\n np.logical_and(np.not_equal(mask.dataobj, 0), brainmask.dataobj>0)\n ].mean()\n return zaverage", "This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!", "zaverages = np.zeros([len(zmaps), len(gradmaps), len(gradmaps[0])])\n\n# load first gradmap just for resampling\ngradmap = nib.load(gradmaps[0][0])\n\n# Load a brainmask\nbrainmask = nib.load(brainmaskfile)\nbrainmask = resample_img(brainmask, target_affine=gradmap.affine, target_shape=gradmap.shape)\n\n# Initialise a progress bar:\nprogbar = FloatProgress(min=0, max=zaverages.size)\ndisplay(progbar)\n\n# loop through the network files:\nfor i1, zmapfile in enumerate(zmaps):\n # load the neurosynth activation file:\n zmap = nib.load(zmapfile)\n # make sure the images are in the same space:\n zmap = resample_img(zmap,\n target_affine=gradmap.affine,\n target_shape=gradmap.shape)\n # loop through the bins:\n for i2, percentile in enumerate(percentiles):\n # loop through the subjects:\n for i3, gradmapfile in enumerate(gradmaps[percentile]):\n gradmap = nib.load(gradmapfile) # load image\n zaverages[i1, i2, i3] = zinsidemask(zmap, gradmap) # calculate av. 
z-score\n progbar.value += 1 # update progressbar (only works in jupyter notebooks)\n", "To save time next time, we'll save the result of this to file:", "# np.save(os.path.join(os.getcwd(), 'data', 'average-abs-z-scores'), zaverages)\n\nzaverages = np.load(os.path.join(os.getcwd(), 'data', 'average-z-scores.npy'))", "Extract a list of which group contains which participants.", "df_phen = pd.read_csv('data' + os.sep + 'SelectedSubjects.csv')\ndiagnosis = df_phen.loc[:, 'DX_GROUP']\nfileids = df_phen.loc[:, 'FILE_ID']\n\ngroupvec = np.zeros(len(gradmaps[0]))\nfor filenum, filename in enumerate(gradmaps[0]):\n fileid = os.path.split(filename)[-1][5:-22]\n groupvec[filenum] = (diagnosis[fileids.str.contains(fileid)])\n\nprint(groupvec.shape)", "Make a plot of the z-scores inside each parcel for each gradient, split by group!", "fig = plt.figure(figsize=(15, 8))\ngrouplabels = ['Control group', 'Autism group']\nfor group in np.unique(groupvec):\n \n ylabels = [os.path.split(fname)[-1][0:-23].replace('_', ' ') for fname in zmaps]\n # remove duplicates!\n includenetworks = []\n seen = set()\n for string in ylabels:\n includenetworks.append(string not in seen)\n seen.add(string)\n \n ylabels = [string for index, string in enumerate(ylabels) if includenetworks[index]]\n \n tmp_zaverages = zaverages[includenetworks, :, :]\n tmp_zaverages = tmp_zaverages[:, :, groupvec==group]\n \n tmp_zaverages = tmp_zaverages[np.argsort(np.argmax(tmp_zaverages.mean(axis=2), axis=1)), :, :]\n \n # make the figure\n plt.subplot(1, 2, group)\n cax = plt.imshow(tmp_zaverages.mean(axis=2),\n cmap='bwr', interpolation='nearest',\n vmin=zaverages.mean(axis=2).min(),\n vmax=zaverages.mean(axis=2).max())\n \n ax = plt.gca()\n plt.title(grouplabels[int(group-1)])\n\n plt.xlabel('Percentile of principle gradient')\n ax.set_xticks(np.arange(0, len(percentiles), 3))\n ax.set_xticklabels(['100-90', '70-60', '40-30', '10-0'])\n \n ax.set_yticks(np.arange(0, len(seen), 1))\n ax.set_yticklabels(ylabels)\n\n ax.set_yticks(np.arange(-0.5, len(seen), 1), minor=True)\n ax.set_xticks(np.arange(-0.5, 10, 1), minor=True)\n ax.grid(which='minor', color='w', linewidth=2)\n \n fig.subplots_adjust(right=0.8)\n cbar_ax = fig.add_axes([0.85, 0.15, 0.01, 0.7])\n fig.colorbar(cax, cax=cbar_ax, label='Average Z-Score')\n #fig.colorbar(cax, cmap='bwr', orientation='horizontal')\n\nplt.savefig('./figures/z-scores-inside-gradient-bins.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AlJohri/DAT-DC-12
notebooks/nlp_spacy.ipynb
mit
[ "Intro Spacy", "!pip install spacy nltk", "Spacy Documentation\nSpacy is an NLP/Computational Linguistics package built from the ground up. It's written in Cython so it's fast!!\nLet's check it out. Here's some text from Alice in Wonderland free on Gutenberg.", "text = \"\"\"'Please would you tell me,' said Alice, a little timidly, for she was not quite sure whether it was good manners for her to speak first, 'why your cat grins like that?'\n'It's a Cheshire cat,' said the Duchess, 'and that's why. Pig!'\nShe said the last word with such sudden violence that Alice quite jumped; but she saw in another moment that it was addressed to the baby, and not to her, so she took courage, and went on again:—\n'I didn't know that Cheshire cats always grinned; in fact, I didn't know that cats could grin.'\n'They all can,' said the Duchess; 'and most of 'em do.'\n'I don't know of any that do,' Alice said very politely, feeling quite pleased to have got into a conversation.\n'You don't know much,' said the Duchess; 'and that's a fact.'\"\"\"", "Download and load the model. SpaCy has an excellent English NLP processor. It has the following features which we shall explore:\n- Entity recognition\n- Dependency Parsing\n- Part of Speech tagging\n- Word Vectorization\n- Tokenization\n- Lemmatization\n- Noun Chunks\nDownload the Model, it may take a while", "import spacy\nimport spacy.en.download\n# spacy.en.download.main()\nprocessor = spacy.en.English()\n\nprocessed_text = processor(text)\nprocessed_text", "Looks like the same text? Let's dig a little deeper\nTokenization\nSentences", "n = 0\nfor sentence in processed_text.sents:\n print(n, sentence)\n n+=1", "Words and Punctuation - Along with POS tagging", "n = 0\nfor sentence in processed_text.sents:\n for token in sentence:\n print(n, token, token.pos_, token.lemma_)\n n+=1", "Entities - Explanation of Entity Types", "for entity in processed_text.ents:\n print(entity, entity.label_)", "Noun Chunks", "for noun_chunk in processed_text.noun_chunks:\n print(noun_chunk)", "The Semi Holy Grail - Syntactic Depensy Parsing See Demo for clarity", "def pr_tree(word, level):\n if word.is_punct:\n return\n for child in word.lefts:\n pr_tree(child, level+1)\n print('\\t'* level + word.text + ' - ' + word.dep_)\n for child in word.rights:\n pr_tree(child, level+1)\n\nfor sentence in processed_text.sents:\n pr_tree(sentence.root, 0)\n print('-------------------------------------------')", "What is 'nsubj'? 'acomp'? See The Universal Dependencies\nWord Vectorization - Word2Vec", "proc_fruits = processor('''I think green apples are delicious. \n While pears have a strange texture to them. \n The bowls they sit in are ugly.''')\napples, pears, bowls = proc_fruits.sents\nfruit = processed_text.vocab['fruit']\nprint(apples.similarity(fruit))\nprint(pears.similarity(fruit))\nprint(bowls.similarity(fruit))\n", "Assingment - In Class\nFind your favorite news source and grab the article text.\n\nShow the most common words in the article.\nShow the most common words under a part of speech. (i.e. NOUN: {'Bob':12, 'Alice':4,})\nFind a subject/object relationship through the dependency parser in any sentence.\nShow the most common Entities and their types. \nFind Entites and their dependency (hint: entity.root.head)\nFind the most similar words in the article" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
probml/pyprobml
notebooks/misc/dropout_MLP_torch.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/supplements/dropout_MLP_torch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nDropout in an MLP\nBased on sec 4.6 of\nhttp://d2l.ai/chapter_multilayer-perceptrons/dropout.html", "import numpy as np\nimport matplotlib.pyplot as plt\n\nnp.random.seed(seed=1)\nimport math\n\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\n\n!mkdir figures # for saving plots\n\n!wget https://raw.githubusercontent.com/d2l-ai/d2l-en/master/d2l/torch.py -q -O d2l.py\nimport d2l", "Add dropout layer by hand to an MLP", "def dropout_layer(X, dropout):\n assert 0 <= dropout <= 1\n # In this case, all elements are dropped out\n if dropout == 1:\n return torch.zeros_like(X)\n # In this case, all elements are kept\n if dropout == 0:\n return X\n mask = (torch.Tensor(X.shape).uniform_(0, 1) > dropout).float()\n return mask * X / (1.0 - dropout)\n\n# quick test\ntorch.manual_seed(0)\nX = torch.arange(16, dtype=torch.float32).reshape((2, 8))\nprint(X)\nprint(dropout_layer(X, 0.0))\nprint(dropout_layer(X, 0.5))\nprint(dropout_layer(X, 1.0))\n\n# A common trend is to set a lower dropout probability closer to the input layer\nclass Net(nn.Module):\n def __init__(\n self, num_inputs, num_outputs, num_hiddens1, num_hiddens2, is_training=True, dropout1=0.2, dropout2=0.5\n ):\n super(Net, self).__init__()\n self.dropout1 = dropout1\n self.dropout2 = dropout2\n self.num_inputs = num_inputs\n self.training = is_training\n self.lin1 = nn.Linear(num_inputs, num_hiddens1)\n self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)\n self.lin3 = nn.Linear(num_hiddens2, num_outputs)\n self.relu = nn.ReLU()\n\n def forward(self, X):\n H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs))))\n # Use dropout only when training the model\n if self.training == True:\n # Add a dropout layer after the first fully connected layer\n H1 = dropout_layer(H1, self.dropout1)\n H2 = self.relu(self.lin2(H1))\n if self.training == True:\n # Add a dropout layer after the second fully connected layer\n H2 = dropout_layer(H2, self.dropout2)\n out = self.lin3(H2)\n return out", "Fit to FashionMNIST\nUses the d2l.load_data_fashion_mnist function.", "train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=256)", "Fit model using SGD.\nUses the d2l.train_ch3 function.", "torch.manual_seed(0)\n# We pick a wide model to cause overfitting without dropout\nnum_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256\nnet = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2, dropout1=0.5, dropout2=0.5)\nloss = nn.CrossEntropyLoss()\nlr = 0.5\ntrainer = torch.optim.SGD(net.parameters(), lr=lr)\nnum_epochs = 10\nd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)", "When we turn dropout off, we notice a slightly larger gap between train and test accuracy.", "torch.manual_seed(0)\nnet = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2, dropout1=0.0, dropout2=0.0)\nloss = nn.CrossEntropyLoss()\ntrainer = torch.optim.SGD(net.parameters(), lr=lr)\nnum_epochs = 10\nd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)", "Dropout using PyTorch layer", "dropout1 = 0.5\ndropout2 = 0.5\nnet = nn.Sequential(\n nn.Flatten(),\n nn.Linear(num_inputs, num_hiddens1),\n nn.ReLU(),\n # Add a dropout layer after the first fully connected layer\n nn.Dropout(dropout1),\n nn.Linear(num_hiddens2, num_hiddens1),\n nn.ReLU(),\n # Add a dropout 
layer after the second fully connected layer\n nn.Dropout(dropout2),\n nn.Linear(num_hiddens2, num_outputs),\n)\n\n\ndef init_weights(m):\n if type(m) == nn.Linear:\n nn.init.normal_(m.weight, std=0.01)\n\n\ntorch.manual_seed(0)\nnet.apply(init_weights);\n\ntrainer = torch.optim.SGD(net.parameters(), lr=lr)\nd2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)", "Visualize some predictions", "def display_predictions(net, test_iter, n=6):\n # Extract first batch from iterator\n for X, y in test_iter:\n break\n # Get labels\n trues = d2l.get_fashion_mnist_labels(y)\n preds = d2l.get_fashion_mnist_labels(d2l.argmax(net(X), axis=1))\n # Plot\n titles = [true + \"\\n\" + pred for true, pred in zip(trues, preds)]\n d2l.show_images(d2l.reshape(X[0:n], (n, 28, 28)), 1, n, titles=titles[0:n])\n\n# d2l.predict_ch3(net, test_iter)\ndisplay_predictions(net, test_iter)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robcarver17/pysystemtrade
examples/introduction/asimpletradingrule.ipynb
gpl-3.0
[ "Simple Trading Rule", "from sysdata.sim.csv_futures_sim_data import csvFuturesSimData\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Work up a minimum example of a trend following system\nLet's get some data\nWe can get data from various places; however for now we're going to use\nprepackaged 'legacy' data stored in csv files", "data = csvFuturesSimData()\ndata", "We get stuff out of data with methods", "print(data.get_instrument_list())\nprint(data.get_raw_price(\"EDOLLAR\").tail(5))", "data can also behave in a dict like manner (though it's not a dict)", "data['SP500']\n\ndata.keys()", "... however this will only access prices\n(note these prices have already been backadjusted for rolls)\nWe have extra futures data here", "data.get_instrument_raw_carry_data(\"EDOLLAR\").tail(6)", "Technical note: csvFuturesSimData inherits from FuturesData which itself inherits from simData\nThe chain is 'data specific' <- 'asset class specific' <- 'generic'\nLet's create a simple trading rule\nNo capping or scaling", "import pandas as pd\nfrom sysquant.estimators.vol import robust_vol_calc\n\n\ndef calc_ewmac_forecast(price, Lfast, Lslow=None):\n \"\"\"\n Calculate the ewmac trading rule forecast, given a price and EWMA speeds\n Lfast, Lslow and vol_lookback\n\n \"\"\"\n # price: This is the stitched price series\n # We can't use the price of the contract we're trading, or the volatility\n # will be jumpy\n # And we'll miss out on the rolldown. See\n # https://qoppac.blogspot.com/2015/05/systems-building-futures-rolling.html\n\n price = price.resample(\"1B\").last()\n\n if Lslow is None:\n Lslow = 4 * Lfast\n\n # We don't need to calculate the decay parameter, just use the span\n # directly\n fast_ewma = price.ewm(span=Lfast).mean()\n slow_ewma = price.ewm(span=Lslow).mean()\n raw_ewmac = fast_ewma - slow_ewma\n vol = robust_vol_calc(price.diff())\n return raw_ewmac / vol", "Try it out\n(this isn't properly scaled at this stage of course)", "instrument_code = 'EDOLLAR'\nprice = data.daily_prices(instrument_code)\newmac = calc_ewmac_forecast(price, 32, 128)\newmac.columns = ['forecast']\newmac.tail(5)\n\newmac.plot();\nplt.title('Forecast')\nplt.ylabel('Position')\nplt.xlabel('Time')", "Did we make money?", "from systems.accounts.account_forecast import pandl_for_instrument_forecast\naccount = pandl_for_instrument_forecast(forecast=ewmac, price = price)\naccount.curve().plot();\nplt.title('Profit and Loss')\nplt.ylabel('PnL')\nplt.xlabel('Time');\n\naccount.percent.stats()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robblack007/clase-cinematica-robot
Practicas/practica5/Problemas.ipynb
mit
[ "Problemas\nDefina una función que obtenga la cinemática inversa de un pendulo doble.", "def ci_pendulo_doble(x, y):\n # tome en cuenta que las longitudes de los eslabones son 2 y 2\n l1, l2 = 2, 2\n from numpy import arccos, arctan2, sqrt\n # YOUR CODE HERE\n raise NotImplementedError()\n return q1, q2\n\nfrom numpy.testing import assert_allclose\nassert_allclose(ci_pendulo_doble(4, 0), (0,0))\nassert_allclose(ci_pendulo_doble(0, 4), (1.57079632,0))", "Obtenga las posiciones en el espacio articular, $q_1$ y $q_2$, necesarias para que el punto final del pendulo doble llegue a las coordenadas $p_1 = (0,1)$, $p_2 = (1,3)$ y $p_3 = (3,2)$.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nfrom numpy.testing import assert_allclose\nassert_allclose((q11, q21),(0.25268 , 2.636232), rtol=1e-05, atol=1e-05)\n\nfrom numpy.testing import assert_allclose\nassert_allclose((q12, q22),(0.589988, 1.318116), rtol=1e-05, atol=1e-05)\n\nfrom numpy.testing import assert_allclose\nassert_allclose((q13, q23),(0.14017 , 0.895665), rtol=1e-05, atol=1e-05)", "Genere las trayectorias necesarias para que el pendulo doble se mueva del punto $p_1$ al punto $p_2$ en $2s$, del punto $p_2$ al punto $p_3$ en $2s$ y del punto $p_3$ al punto $p_1$ en $2s$.\n\nUtiliza 100 puntos por segundo y asegurate de guardar las trayectorias generadas en las variables correctas para que q1s y q2s tengan las trayectorias completas.", "from generacion_trayectorias import grafica_trayectoria\n# YOUR CODE HERE\nraise NotImplementedError()\nq1s = q1s1 + q1s2 + q1s3\nq2s = q2s1 + q2s2 + q2s3\n\nfrom numpy.testing import assert_allclose\nassert_allclose((q1s[0], q1s[-1]),(0.25268, 0.25268), rtol=1e-05, atol=1e-05)\n\nfrom numpy.testing import assert_allclose\nassert_allclose((q2s[0], q2s[-1]),(2.636232, 2.636232), rtol=1e-05, atol=1e-05)", "Cree una animación con las trayectorias generadas y las funciones proporcionadas a continuación (algunas funciones estan marcadas con comentarios en donde hace falta agregar código).", "from matplotlib.pyplot import figure, style\nfrom matplotlib import animation, rc\nrc('animation', html='html5')\nfrom numpy import sin, cos, arange\n\nfig = figure(figsize=(8, 8))\naxi = fig.add_subplot(111, autoscale_on=False, xlim=(-0.6, 3.1), ylim=(-0.6, 3.1))\nlinea, = axi.plot([], [], \"-o\", lw=2, color='gray')\n\ndef cd_pendulo_doble(q1, q2):\n l1, l2 = 2, 2\n # YOUR CODE HERE\n raise NotImplementedError()\n return xs, ys\n\ndef inicializacion():\n '''Esta funcion se ejecuta una sola vez y sirve para inicializar el sistema'''\n linea.set_data([], [])\n return linea\n\ndef animacion(i):\n '''Esta funcion se ejecuta para cada cuadro del GIF'''\n # YOUR CODE HERE\n raise NotImplementedError()\n linea.set_data(xs, ys)\n \n return linea\n\nani = animation.FuncAnimation(fig, animacion, arange(1, len(q1s)), interval=10, init_func=inicializacion)\nani\n\nfrom numpy.testing import assert_allclose\nassert_allclose(cd_pendulo_doble(0, 0), ([0,2,4], [0,0,0]), rtol=1e-05, atol=1e-05)\nassert_allclose(cd_pendulo_doble(1.57079632,0), ([0, 0, 0],[0, 2, 4]), rtol=1e-05, atol=1e-05)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
minh5/cpsc
reports/api data.ipynb
mit
[ "Introduction\nThis notebook serves as a reporting tool for the CPSC. In this notebook, I laid out the questions CPSC is interested in learning from their SaferProduct API. The format will be that there are a few questions presented and each question will have a findings section where there is a quick summary of the findings while in Section 4, there will be further information on how on the findings were conducted.\nAnalysis\nGiven that the API was down during this time of the reporting, I obtained data from Ana Carolina Areias via Dropbox link. Here I cleaned up the pure JSON format and converted it into a dataframe (the cleaning code can be found in the exploratory.ipynb in the /notebook directory. After that I saved the data using pickle where I can easily load it up for analysis.\nThe questions answered here are the result of a conversation between DKDC and the CPSC regarding their priorities and what information is available from the data.\nThe main takeaways from this analysis is that:\n\nFrom the self-reported statistics of people who reported their injury to the API, it appears that there is a skew against people who are older. The data shows that people who are reporting are 40-60 years old.\nAn overwhelming amount of reports did not involve bodily harm or require medical attention; much of the reports were just incident reports with a particular product\nOut of the reports that resulted in some harm, the most reported product was in the footwear category regarding some harm and discomfort with walking with the Sketchers Tone-Ups shoes\nAlthough not conclusive, but from the reports, there appears to be come indication that there are a lot of fire-related incidents from a cursory examination of the most popular words", "import pickle\nimport operator\n\nimport numpy as np\nimport pandas as pd \nimport gensim.models\n\ndata = pickle.load(open('/home/datauser/cpsc/data/processed/cleaned_api_data', 'rb'))\ndata.head()", "Are there certain populations we're not getting reports from?\nWe can create a basic cross tab between age and gender to see if there are any patterns that emerges.", "pd.crosstab(data['GenderDescription'], data['age_range'])", "From the data, it seems that there's not much underrepresentation by gender. There are only around a thousand less males than females in a dataset of 28,000. Age seems to be a bigger issue. There appears to be a lack of representation of older people using the API. Given that older folks may be less likely to self report, or if they wanted to self report, they may not be tech-savvy enough to use with a web interface. My assumption that people over 70 are probably experience product harm at a higher rate and are not reporting this.\nIf we wanted to raise awareness about a certain tool or item, where should we focus our efforts\nTo construct this, I removed any incidents that did not cause any bodily harm and taking the top ten categories. There were several levels of severity. We can remove complaints that does not involve any physical harm. 
After removing these complaints, it is really interesting to see that \"Footwear\" was the top product category for harm.", "#removing minor harm incidents\nno_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',\n 'No Incident, No Injury', 'No First Aid or Medical Attention Received']\ndamage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :]\ndamage.ProductCategoryPublicName.value_counts()[0:9]", "This is actually perplexing, so I decided to investigate further by analyzing the complaints filed for the \"Footwear\" category. To do this, I created a Word2Vec model, which uses a shallow neural network for text analysis. This process maps a word and the linguistic context it is in to be able to calculate similarity between words. The purpose of this is to find words that are related to each other. For instance, using the complaints that resulted in bodily harm, I found that footwear was associated with pain and walking. It seems that there are injuries related to Sketchers sneakers specifically, since it was the only brand that showed up enough to be included in the model's dictionary. In fact, there was a lawsuit regarding Sketchers and their toning shoes\nAre there certain complaints that people are filing? Quality issues vs injuries?\nLooking below, we see that a vast majority are incidents without any bodily harm. Over 60% of all complaints were categorized as Incident, No Injury.", "data.SeverityTypeDescription.value_counts()", "Although it is labeled as having no injury, it does not necessarily mean that we can't take precautions. What I did was take the same approach as the previous model: I subsetted the data to only complaints that had \"no injury\" and ran a model to examine the words used. From the analysis, we see that the words to, was, and it were the top three words. At first glance, it may seem that these words are meaningless, however if we examine words that are similar to them, we can start seeing a connection.\nFor instance, the words most closely related to \"to\" were \"unable\" and \"trying\", which convey a sense of urgency in attempting to turn something on or off. Examining the word \"unable,\" I was able to see it was related to words such as \"attempted\" and \"disconnect.\" Further investigation led me to find it was dealing with a switch or a plug, possibly dealing with an electrical item.\nA similar picture is painted when trying to examine the word \"was.\" The words that felt out of place were \"emitting\", \"extinguish,\" and \"smelled.\" It is no surprise that, after a few investigations of these words, words like \"sparks\" and \"smoke\" started popping up more. This leads me to believe that these complaints have something to do with encounters closely related to fire. \nSo while these complaints may be near-miss encounters with danger, it may be worthwhile to review these complaints further with an eye out for fire-related injuries or products that could cause fire.", "model.most_similar('was')", "Who are the people who are actually reporting to us?\nThis question is difficult to answer because of a lack of data on the reporter. From the cross tabulation in Section 3.1, we see that the majority of the respondents are female and the largest age group is 40-60. That is probably the best guess of who the people using the API are.\nConclusion\nThis is meant to serve as a starting point on examining the API data. 
The main findings were that:\n\nFrom the self-reported statistics of people who reported their injury to the API, it appears that there is a skew against people who are older. The data shows that people who are reporting are 40-60 years old.\nAn overwhelming number of reports did not involve bodily harm or require medical attention; most of the reports were just incident reports about a particular product\nOut of the reports that resulted in some harm, the most reported product was in the footwear category, regarding harm and discomfort while walking in the Sketchers Tone-Ups shoes\nAlthough not conclusive, the reports give some indication that there are a lot of fire-related incidents, based on a cursory examination of the most popular words\n\nWhile text analysis is helpful, it is often not sufficient. What would really help the analysis process would be to include more information from the user. The following information would be helpful to collect in order to produce more actionable insights.\n\nEthnicity/Race\nSelf Reported Income\nGeographic information\nRegion (Mid Atlantic, New England, etc)\nClosest Metropolitan Area\nState\nCity\nGeolocation of IP address \ncoordinates can be \"jittered\" to preserve anonymity\n\nA great next step would be a deeper text analysis on shoes. It may be possible to train a neural network to consider smaller batches of words so we can capture the context better. Other steps that I would do if I had more time would be to find a way to fix up unicode issues with some of the complaints (there were special characters that prevented some of the complaints from being converted into strings). I would also look further into the category that had the most overall complaints: \"Electric Ranges and Stoves\" and see what the complaints were.\nIf we could address these challenges, there is no doubt we could gain some valuable insights on products that are harming Americans. This report serves as the first step. I would like to thank CPSC for this data set and DKDC for the opportunity to conduct this analysis.\nReferences\nQuestion 2.1\nThe data that we worked with had limited information regarding the victim's demographics besides age and gender. However, that was enough to draw some base inferences. Below we can grab counts by gender, which show that a plurality are female.\nAge is a bit tricky: we have the victim's age in months. 
I converted it into years and broke it down into 10-year age ranges so we can better examine the data.", "data.GenderDescription.value_counts()\n\ndata['age'] = map(lambda x: x/12, data['VictimAgeInMonths'])\nlabels = ['under 10', '10-20', '20-30', '30-40', '40-50', '50-60',\n '60-70','70-80', '80-90', '90-100', 'over 100']\ndata['age_range'] = pd.cut(data['age'], bins=np.arange(0,120,10), labels=labels)\ndata['age_range'][data['age'] > 100] = 'over 100'\n\ncounts = data['age_range'].value_counts()\ncounts.sort()\ncounts", "However, after doing this we still have around 13,000 people with an age of zero. Whether they did not fill in the age or the incident involves an infant is still unknown, but comparing the distribution of products affecting people with an age of 0 against the overall dataset, it appears that null values in the age range represent people who did not fill out an age when reporting", "#Top products affecting people with 0 age\ndata.ix[data['age_range'].isnull(), 'ProductCategoryPublicName'].value_counts()[0:9]\n\n#top products that affect people overall\ndata.ProductCategoryPublicName.value_counts()[0:9]", "Question 2.2\nAt first glance, we can look at the products that were reported, like below, and see that Electric Ranges or Ovens is at the top in terms of harm. However, there are levels of severity within the API that need to be filtered before we can assess which products cause the most harm.", "#overall products listed\ndata.ProductCategoryPublicName.value_counts()[0:9]\n\n#removing minor harm incidents\nno_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',\n 'No Incident, No Injury', 'No First Aid or Medical Attention Received']\ndamage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :]\ndamage.ProductCategoryPublicName.value_counts()[0:9]", "This shows that, for incidents where there were actual injuries and medical attention was given, the top category was footwear, which was weird. To explore this, I created a Word2Vec model that maps out how certain words relate to each other. To train the model, I used the comments that were made from the API. This will train a model and help us identify words that are similar. For instance, if you type in foot, you will get left and right as these words are most closely related to the word foot. However, after some digging around, I found out that the word \"walking\" was associated with \"painful\". I have some reason to believe that there are orthopedic injuries associated with shoes and that people have been experiencing pain while walking in Sketchers that were supposed to tone up their bodies, as well as having some instability or balance issues.", "model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/footwear')\nmodel.most_similar('walking')\n\nmodel.most_similar('injury')\n\nmodel.most_similar('instability')", "Question 2.3", "model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/severity')\nitems_dict = {}\nfor word, vocab_obj in model.vocab.items():\n items_dict[word] = vocab_obj.count\nsorted_dict = sorted(items_dict.items(), key=operator.itemgetter(1))\nsorted_dict.reverse()\nsorted_dict[0:5]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
okartal/popgen-systemsX
exercises.ipynb
cc0-1.0
[ "Population Genetics\nÖnder Kartal, University of Zurich \nThis is a collection of elementary exercises that introduces you to the most fundamental concepts of population genetics. We use Python to explore these topics and solve problems.\nThe exercises have been chosen for a one day workshop on modeling with 2.5 hours exercises preceded by approx. 3 hours of lectures (a primer on population genetics and probability theory). Evidently, it is not possible to cover a lot of material in this time; but upon finishing this workshop, you should feel comfortable picking up a textbook on population genetics and exploring the many software packages that are available for population genetics.\nNote: You can skip the exercises marked by an asterisk and tackle them if time permits.\nPreliminaries\nAll exercises can in principle be solved by only using the Python standard library and a plotting library. However, if you like and it feels more comfortable to you, you can use as well the libraries numpy and pandas. Note, that you have a link to the documentation of Python and standard scientific libraries in the \"Help\" menu of the Jupyter/IPython notebook.\nIPython has so-called magic commands (starting with %) to facilitate certain tasks. In our case, we want to import libraries for efficient handling of numeric data (numpy) and for plotting data (matplotlib). Evaluate the following two commands by pressing shift+enter in the cell; they import the necessary libraries and enable inline display of figures (it make take a few seconds).", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Let us define two vector variables (a regular sequence and a random one) and print them.", "x, y = np.arange(10), np.random.rand(10)\nprint(x, y, sep='\\n')", "The following command plots $y$ as a function of $x$ and labels the axes using $\\LaTeX$.", "plt.plot(x, y, linestyle='--', color='r', linewidth=2)\nplt.xlabel('time, $t$')\nplt.ylabel('frequency, $f$')", "From the tutorial: \"matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB.\"\nComment: The tutorial is a good starting point to learn about the most basic functionalities of matplotlib, especially if you are familiar with MATLAB. Matplotlib is a powerful library but sometimes too complicated for making statistical plots à la R. However, there are other libraries that, in part, are built on matplotlib and provide more convenient functionality for statistical use cases, especially in conjunction with the data structures that the library pandas provides (see pandas, seaborn, ggplot and many more).\nHardy-Weinberg Equilibrium\nThese exercises should make you comfortable with the fundamental notions of population genetics: allele and genotype frequencies, homo- and heterozygosity, and inbreeding.\nWe will use data from a classical paper on enzyme polymorphisms at the alkaline phosphatase (ALP) locus in humans (Harris 1966). In this case, the alleles have been defined in terms of protein properties. Harris could electrophoretically distinguish three proteins by their migration speed and called them S (slow), F (fast), and I (intermediate).\nWe use a Python dictionary to store the observed numbers of genotypes at the ALP locus in a sample from the English people.", "alp_genotype = {'obs_number':\n {'SS': 141, 'SF': 111, 'FF': 28, 'SI': 32, 'FI': 15, 'II': 5}\n }", "1. Calculate the observed genotype frequencies at the ALP locus.\n2. Calculate the observed allele frequencies at the ALP locus.\n3. 
Calculate the expected genotype frequencies if the ALP locus were in Hardy-Weinberg equilibrium.\n[$\\ast$] 4. Calculate the estimate of the inbreeding coefficient $F$ for the ALP locus.\nThe inbreeding coefficient is defined as\n$$F = 1 - \\frac{h_{\\mathrm{obs}}}{h_{\\mathrm{exp}}},$$\nwhere $h$ denotes the (observed and expected) frequency of heterozygotes. Can you interpret the result in simple terms?\nGenetic Drift\nNot all gametes that are produced by an organism pass over to the next generation. Due to numerous possible influences there is only a finite sample that contributes to the next generation. Therefore, not all alleles of a gene are guaranteed to appear in the next generation in proportions equal to those in the present generation. As long as we cannot specify a process that leads to a specific selection of alleles and we have no reason to believe that the allele itself has a bearing upon its selection, sampling is an undirected (i.e. random) cause of allele frequency changes in a population. We call such an undirected carry-over of genes genetic drift.\nGenetic drift does not introduce any new assumptions compared to the Hardy-Weinberg case; it just drops the assumption of infinite population size.\n5. Write a function that runs the Wright-Fisher model of genetic drift.\nThe function must at least take the following arguments:\n\nnumber of generations\nsize of the population (i.e. number of diploid individuals)\ninitial allele frequency (we have only two alleles, so considering a single allele is enough)\n\nThe function should return a list (or an array if you like) that represents the trajectory of the allele over the generations.\n6. Plot several trajectories (i.e. replicate populations) of the Wright-Fisher model and study genetic drift with different parameter values.\n\nWhat is the long-term behaviour of the locus?\nWhat is the effect of small/large population sizes on the trajectories?\nDo the trajectories of the replicate populations differ?\nDo rare alleles become extinct more often than abundant alleles?\n\n[$\\ast$] 7. Plot the distribution of allele frequency under genetic drift.\nThere is another way to look at the dynamics of a locus under genetic drift. If we have a large collection of replicate populations, we can take, at each time point, the allele frequencies of all these populations and plot a histogram. Thus, instead of looking at individual trajectories, we can observe how the distribution of this allele changes due to genetic drift across all replicate populations. This viewpoint of looking at a time-dependent probability density, is central for understanding the diffusion approximation to genetic drift (Kimura 1955).\nWrite a function that takes the output of the Wright-Fisher model, a list of generation times and plots a series of histograms of allele frequencies. What can you observe?\n8. Model genetic drift as a Markov chain.\nThe temporal evolution of the probability distribution is actually governed by a deterministic equation, the Markov chain. To simulate it, we have to only know the transition probabilities and the initial frequencies of all possible states. Since we are looking at a population, the possible states are given by the number of reference alleles $A$; for a population of $2N$ alleles, we have the states $X(A)=0, 1, 2, \\ldots, 2N$. 
The transition probability from $X(A)=i$ to $X(A)=j$ is given by the binomial distribution:\n$$T_{ij} = \\binom{2N}{j} \\left(\\frac{i}{2N}\\right)^j \\left(1-\\frac{i}{2N}\\right)^{2N-j}$$\n\nWrite a function that gives the transition matrix for a given population size $N$. Tip: For an efficient implementation, you can use the binom() function from scipy.stats. How can you test if your matrix is consistent?\nFor $N=4$, calculate the probability that a population with 4 copies of allele A transitions into a state with 3, 4, 5 copies. Why should these values be symmetric around 4 copy numbers? \nUse the function matrix_power from numpy.linalg to compute the distribution for 19 generations with the parameters $N=16$ and initial population frequency of the reference allele of $\\frac{1}{2}$. The state probability vector after $t$ transitions is given by\n$$p(t) = p(0)T^t$$\n\nMutation\nYou have seen yourself that genetic drift removes variation from the population. Since we can observe standing variation, it is evident that genetic drift cannot be the only evolutionary force. There must be something that causes variation. To a certain extent, new variants can arise in a population due to migration, that is an influx of new alleles. However, the ultimate cause of allelic variation is mutation.\nTo study the interplay of drift and mutation, we will focus on the decay of heterozygosity or the dynamics of the inbreeding coefficient. With mutation, inbreeding changes according to the formula\n$$ F_t = \\left[ \\frac{1}{2N} + \\left( 1 - \\frac{1}{2N} \\right) F_{t-1} \\right] \\left( 1 - u \\right)^2 $$\nwhere $(1-u)^2$ is the probability that no mutation occured in either of the two alleles and $u$ is the mutation probability (also called mutation rate).\n9. Simulate the dynamics of the inbreeding coefficient with and without mutation and observe the stationary state. Pick a population size not too small. Play with the number of generations.\nReferences\nGillespie (2004) Population Genetics: A Concise Guide The Johns Hopkins University Press\nHartl & Clark (2007) Principles of Population Genetics Sinauer Associates, Inc.\nOther Resources\nGenetic Simulation Resources" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google-research/google-research
graph_sampler/molecule_sampling_demo.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/google-research/google-research/blob/master/graph_sampler/molecule_sampling_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2022 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "# Install graph_sampler\n!git clone https://github.com/google-research/google-research.git\n!pip install google-research/graph_sampler\n\nfrom rdkit import Chem\nimport rdkit.Chem.Draw\nfrom graph_sampler import molecule_sampler\nfrom graph_sampler import stoichiometry\nimport numpy as np", "Generating samples from a single stoichiometry", "stoich = stoichiometry.Stoichiometry({'C': 10, 'O': 2, 'N': 3, 'H': 16, 'F': 2, 'O-': 1, 'N+': 1})\nassert stoichiometry.is_valid(stoich), 'Cannot form a connected graph with this stoichiometry.'\nprint('Number of heavy atoms:', sum(stoich.counts.values()) - stoich.counts['H'])\n\n%%time\nsampler = molecule_sampler.MoleculeSampler(stoich,\n relative_precision=0.03,\n rng_seed=2044365744)\nweighted_samples = [graph for graph in sampler]\nstats = sampler.stats()\nrejector = molecule_sampler.RejectToUniform(weighted_samples,\n max_importance=stats['max_final_importance'],\n rng_seed=265580748)\nuniform_samples = [graph for graph in rejector]\nprint(f'generated {len(weighted_samples)}, kept {len(uniform_samples)}, '\n f'estimated total: {stats[\"estimated_num_graphs\"]:.2E} ± '\n f'{stats[\"num_graphs_std_err\"]:.2E}')\n\n#@title Draw some examples\nmols = [molecule_sampler.to_mol(g) for g in uniform_samples]\nChem.Draw.MolsToGridImage(mols[:8], molsPerRow=4, subImgSize=(200, 140))", "Combining samples from multiple stoichiometries\nHere we'll generate random molecules with 5 heavy atoms selected from C, N, and O. These small numbers are chosen just to illustrate the code. In this small an example, you could just enumerate all molecules. 
For large numbers of heavy atoms selected from a large set, you'd want to parallelize a lot of this.", "#@title Enumerate valid stoichiometries subject to the given constraint\nheavy_elements = ['C', 'N', 'O']\nnum_heavy = 5\n\n# We'll dump stoichiometries, samples, and statistics into a big dictionary.\nall_data = {}\nfor stoich in stoichiometry.enumerate_stoichiometries(num_heavy, heavy_elements):\n key = ''.join(stoich.to_element_list())\n all_data[key] = {'stoich': stoich} \n\nmax_key_size = max(len(k) for k in all_data.keys())\nprint(f'{len(all_data)} stoichiometries')\n\n#@title For each stoichiometry, generate samples and estimate the number of molecules\nfor key, data in all_data.items():\n sampler = molecule_sampler.MoleculeSampler(data['stoich'], relative_precision=0.2)\n data['weighted_samples'] = [graph for graph in sampler]\n stats = sampler.stats()\n data['stats'] = stats\n rejector = molecule_sampler.RejectToUniform(data['weighted_samples'],\n max_importance=stats['max_final_importance'])\n data['uniform_samples'] = [graph for graph in rejector]\n print(f'{key:>{max_key_size}}:\\tgenerated {len(data[\"weighted_samples\"])},\\t'\n f'kept {len(data[\"uniform_samples\"])},\\t'\n f'estimated total {int(stats[\"estimated_num_graphs\"])} ± {int(stats[\"num_graphs_std_err\"])}')\n\n#@title Combine into one big uniform sampling of the whole space\nbucket_sizes = [data['stats']['estimated_num_graphs'] for data in all_data.values()]\nsample_sizes = [len(data['uniform_samples']) for data in all_data.values()]\nbase_iters = [data['uniform_samples'] for data in all_data.values()]\n\naggregator = molecule_sampler.AggregateUniformSamples(bucket_sizes, sample_sizes, base_iters)\nmerged_uniform_samples = [graph for graph in aggregator]\n\ntotal_estimate = sum(data['stats']['estimated_num_graphs'] for data in all_data.values())\ntotal_variance = sum(data['stats']['num_graphs_std_err']**2 for data in all_data.values())\ntotal_std = np.sqrt(total_variance)\n\nprint(f'{len(merged_uniform_samples)} samples after merging, of an estimated '\n f'{total_estimate:.1f} ± {total_std:.1f}')\n\n#@title Draw some examples\nmols = [molecule_sampler.to_mol(g) for g in merged_uniform_samples]\nChem.Draw.MolsToGridImage(np.random.choice(mols, size=16), molsPerRow=4, subImgSize=(200, 140))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
LucaCanali/Miscellaneous
Oracle_Jupyter/Oracle_Jupyter_oracledb_pandas.ipynb
apache-2.0
[ "Oracle and Python with oracledb\nThis is an example of how to query Oracle from Python\nSetup and prerequisites\nThis is how you can setup an Oracle instance for testing using a docker image for oracle-xe \n1. run oracle xe on a container from gvenzl dockerhub repo https://github.com/gvenzl/oci-oracle-xe\ndocker run -d --name mydb1 -e ORACLE_PASSWORD=oracle -p 1521:1521 gvenzl/oracle-xe:latest # or use :slim\nwait till the DB is started, check logs at:\ndocker logs -f mydb1\n2. Install the scott/tiger schema with the emp table in PDB xepdb1:\ndocker exec -it mydb1 /bin/bash\nsed -e s=SCOTT/tiger=SCOTT/tiger@xepdb1= -e s/OFF/ON/ /opt/oracle/product/21c/dbhomeXE/rdbms/admin/utlsampl.sql &gt; script.sql\nsqlplus system/oracle@xepdb1 &lt;&lt;EOF\[email protected]\nEOF\nexit\noracledb library: This uses oracledb to connect to oracle, so no need to install the Oracle client.\nNote: oracledb can also work with the oracle client as cx_Oracle did,\nsee documentation for details.\nQuery Oracle from Python using the oracledb library", "# connect to Oracle using oracledb\n# !pip install oracledb\n\nimport oracledb\n\ndb_user = 'scott'\ndb_connect_string = 'localhost:1521/XEPDB1'\ndb_pass = 'tiger'\n\n# To avoid storig connection passwords use getpas or db_config\n# db_connect_string = 'dbserver:1521/orcl.mydomain.com'\n# import getpass\n# db_pass = getpass.getpass()\n\nora_conn = oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string)\n\n# open a cursor, run a query and fetch the results\n\ncursor = ora_conn.cursor()\ncursor.execute('select ename, sal from emp')\nres = cursor.fetchall()\ncursor.close()\n\nprint(res)", "oracledb integration with Pandas", "import pandas as pd\n\n# query Oracle using ora_conn and put the result into a pandas Dataframe\ndf_ora = pd.read_sql('select * from emp', con=ora_conn) \ndf_ora", "Use of bind variables", "df_ora = pd.read_sql('select * from emp where empno=:myempno', params={\"myempno\":7839}, \n con=ora_conn) \ndf_ora", "Basic visualization", "import matplotlib.pyplot as plt \nplt.style.use('seaborn-darkgrid')\n\ndf_ora = pd.read_sql('select ename \"Name\", sal \"Salary\" from emp', con=ora_conn) \n\nora_conn.close()\n\ndf_ora.plot(x='Name', y='Salary', title='Salary details, from Oracle demo table', \n figsize=(10, 6), kind='bar', color='blue');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Qumulo/python-notebooks
notebooks/File and Data Management.ipynb
gpl-3.0
[ "File and Data Management with Qumulo API python bindings\nThis jupyter notebook walks through some of the basics of file and data management with Qumulo API python bindings.", "import os\nimport re\nimport sys\nimport StringIO\nfrom qumulo.lib.request import RequestError\nfrom qumulo.rest_client import RestClient\n\n# set your environment variables or fill in the variables below\nAPI_HOSTNAME = os.environ['API_HOSTNAME'] if 'API_HOSTNAME' in os.environ else '{your-cluster-hostname}'\nAPI_USER = os.environ['API_USER'] if 'API_USER' in os.environ else '{api-cluster-user}'\nAPI_PASSWORD = os.environ['API_PASSWORD'] if 'API_PASSWORD' in os.environ else '{api-cluster-password}'\n\nrc = RestClient(API_HOSTNAME, 8000)\nrc.login(API_USER, API_PASSWORD)\nprint(\"logged in as: %(name)s\" % rc.auth.who_am_i())", "A few Qumulo API file and direcotory python bindings\nfs.create_directory\narguments:\n- name: Name of directory to be created\n- dir_path*: Destination path for the parent of created directory\n- dir_id*: Destination inode id for the parent of the created directory\n*Either dir_path or dir_id is required\n\nfs.create_file\narguments:\n- name: Name of file to be created\n- dir_path: Destination path for the directory of created file\n- dir_id: Destination inode id for the directory of the created file\n\nfs.write_file\narguments:\n- data_file: A python object of the local file's content\n- path: Destination file path on Qumulo \n- id_: Destination inode file id on Qumulo\n- if_match:\n\nfs.get_attr\narguments:\n- path:\n- id_:\n- snapshot:\nCreate a working directory for this exercise", "base_path = '/'\ndir_name = 'test-qumulo-fs-data'\n\ntry:\n the_dir_meta = rc.fs.create_directory(dir_path=base_path, name=dir_name)\n print(\"Successfully created %s%s.\" % (base_path, dir_name))\nexcept RequestError as e:\n print(\"** Exception: %s - Details: %s\\n\" % (e.error_class,e))\n if e.error_class == 'fs_entry_exists_error':\n the_dir_meta = rc.fs.get_attr(base_path + dir_name)\n\nfor k, v in the_dir_meta.iteritems():\n if re.search('(id|size|path|change_time)', k):\n print(\"%19s - %s\" % (k, v))", "Create a file in an existing path", "file_name = 'first-file.txt'\n\n# relies on the base path and direcotry name created in the code above.\ntry:\n the_file_meta = rc.fs.create_file(name=file_name, dir_path=base_path + dir_name)\nexcept RequestError as e:\n print(\"** Exception: %s - Details: %s\\n\" % (e.error_class,e))\n if e.error_class == 'fs_entry_exists_error':\n the_file_meta = rc.fs.get_attr(base_path + dir_name + '/' + file_name)\nprint(\"We've got a file. 
Its id is: %s\" % the_file_meta['id'])\n\n# writing a local file from /tmp/ to the qumulo cluster\nfw = open(\"/tmp/local-file-from-temp.txt\", \"w\")\nfw.write(\"Let's write 100 sentences on this virtual chalkboard\\n\" * 100)\nfw.close()\n\nwrite_file_meta = rc.fs.write_file(data_file=open(\"/tmp/local-file-from-temp.txt\"), \n path=base_path + dir_name + '/' + file_name)\n\nprint(\"\"\"name: %(path)s\nbytes: %(size)s\nmod time: %(modification_time)s\"\"\" % write_file_meta)\n\nstring_io_file_name = 'write-from-string-io.txt'\n\ntry:\n rc.fs.create_file(name=string_io_file_name, dir_path=base_path + dir_name)\nexcept RequestError as e:\n print(\"Exception: %s - Details: %s\\n\" % (e.error_class,e))\n\nfw = StringIO.StringIO()\nfw.write(\"Let's write 200 sentences on this virtual chalkboard\\n\" * 200)\nwrite_file_meta = rc.fs.write_file(data_file=fw, \n path=base_path + dir_name + '/' + string_io_file_name)\nfw.close()\nprint(\"\"\"name: %(path)s\nbytes: %(size)s\nmod time: %(modification_time)s\"\"\" % write_file_meta)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
InsightLab/data-science-cookbook
2019/09-clustering/cl_AlefCarneiro.ipynb
mit
[ "<p style=\"text-align: center;\">Clusterização e algoritmo K-means</p>\nOrganizar dados em agrupamentos é um dos modos mais fundamentais de compreensão e aprendizado. Como por exemplo, os organismos em um sistema biologico são classificados em domínio, reino, filo, classe, etc. A análise de agrupamento é o estudo formal de métodos e algoritmos para agrupar objetos de acordo com medidas ou características semelhantes. A análise de cluster, em sua essência, não utiliza rótulos de categoria que marcam objetos com identificadores anteriores, ou seja, rótulos de classe. A ausência de informação de categoria distingue o agrupamento de dados (aprendizagem não supervisionada) da classificação ou análise discriminante (aprendizagem supervisionada). O objetivo da clusterização é encontrar estruturas em dados e, portanto, é de natureza exploratória. \nA técnica de Clustering tem uma longa e rica história em uma variedade de campos científicos. Um dos algoritmos de clusterização mais populares e simples, o K-means, foi publicado pela primeira vez em 1955. Apesar do K-means ter sido proposto há mais de 50 anos e milhares de algoritmos de clustering terem sido publicados desde então, o K-means é ainda amplamente utilizado.\nFonte: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010\nObjetivo\n\nImplementar as funções do algoritmo KMeans passo-a-passo\nComparar a implementação com o algoritmo do Scikit-Learn\nEntender e codificar o Método do Cotovelo\nUtilizar o K-means em um dataset real \n\nCarregando os dados de teste\nCarregue os dados disponibilizados, e identifique visualmente em quantos grupos os dados parecem estar distribuídos.", "# import libraries\n\n# linear algebra\nimport numpy as np \n# data processing\nimport pandas as pd \n# data visualization\nfrom matplotlib import pyplot as plt \n# sys - to get maximum float value\nimport sys\n\n# load the data with pandas\nurl = 'https://raw.githubusercontent.com/InsightLab/data-science-cookbook/master/2019/09-clustering/dataset.csv'\ndataset = pd.read_csv(url, header=None)\ndataset = np.array(dataset)\n\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\nplt.show()", "1. Implementar o algoritmo K-means\nNesta etapa você irá implementar as funções que compõe o algoritmo do KMeans uma a uma. É importante entender e ler a documentação de cada função, principalmente as dimensões dos dados esperados na saída.\n1.1 Inicializar os centróides\nA primeira etapa do algoritmo consiste em inicializar os centróides de maneira aleatória. Essa etapa é uma das mais importantes do algoritmo e uma boa inicialização pode diminuir bastante o tempo de convergência.\nPara inicializar os centróides você pode considerar o conhecimento prévio sobre os dados, mesmo sem saber a quantidade de grupos ou sua distribuição. 
\n\nDica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html", "def calculate_initial_centers(dataset, k):\n \"\"\"\n Inicializa os centróides iniciais de maneira arbitrária \n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n k -- Número de centróides desejados\n \n Retornos:\n centroids -- Lista com os centróides calculados - [k,n]\n \"\"\"\n \n #### CODE HERE ####\n m = dataset.shape[0]\n \n centroids = list(dataset[np.random.randint(0, m - 1, 1)])\n \n for it1 in range(k - 1):\n max_dist = -1\n\n for it2 in range(m):\n nrst_cent_dist = sys.float_info.max\n\n for it3 in range(len(centroids)):\n dist = np.linalg.norm(dataset[it2] - centroids[it3])\n # Get the distance to the nearest centroid\n if (dist < nrst_cent_dist):\n nrst_cent_dist = dist\n nrst_cent = dataset[it2]\n\n if (nrst_cent_dist > max_dist):\n max_dist = nrst_cent_dist\n new_cent = nrst_cent\n\n centroids.append(new_cent)\n\n centroids = np.array(centroids)\n ### END OF CODE ###\n \n return centroids", "Teste a função criada e visualize os centróides que foram calculados.", "k = 3\ncentroids = calculate_initial_centers(dataset, k)\n\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)\nplt.show()", "1.2 Definir os clusters\nNa segunda etapa do algoritmo serão definidos o grupo de cada dado, de acordo com os centróides calculados.\n1.2.1 Função de distância\nCodifique a função de distância euclidiana entre dois pontos (a, b).\nDefinido pela equação:\n$$ dist(a, b) = \\sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$\n$$ dist(a, b) = \\sqrt{\\sum_{i=1}^{n}(a_i-b_i)^{2}} $$", "def euclidean_distance(a, b):\n \"\"\"\n Calcula a distância euclidiana entre os pontos a e b\n \n Argumentos:\n a -- Um ponto no espaço - [1,n]\n b -- Um ponto no espaço - [1,n]\n \n Retornos:\n distance -- Distância euclidiana entre os pontos\n \"\"\"\n \n #### CODE HERE ####\n n = len(a)\n \n distance = 0\n for i in range(n):\n distance = distance + (a[i] - b[i])**2\n \n distance = distance**0.5\n ### END OF CODE ###\n \n return distance", "Teste a função criada.", "a = np.array([1, 5, 9])\nb = np.array([3, 7, 8])\n\nif (euclidean_distance(a,b) == 3):\n print(\"Distância calculada corretamente!\")\nelse:\n print(\"Função de distância incorreta\")", "1.2.2 Calcular o centroide mais próximo\nUtilizando a função de distância codificada anteriormente, complete a função abaixo para calcular o centroid mais próximo de um ponto qualquer. 
\n\nDica: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html", "def nearest_centroid(a, centroids):\n \"\"\"\n Calcula o índice do centroid mais próximo ao ponto a\n \n Argumentos:\n a -- Um ponto no espaço - [1,n]\n centroids -- Lista com os centróides - [k,n]\n \n Retornos:\n nearest_index -- Índice do centróide mais próximo\n \"\"\"\n \n #### CODE HERE ####\n # Check if centroids has two dimensions and, if not, convert to\n if len(centroids.shape) == 1:\n centroids = np.array([centroids])\n nrst_cent_dist = sys.float_info.max\n \n for j in range(len(centroids)):\n dist = euclidean_distance(a, centroids[j])\n if (dist < nrst_cent_dist):\n nrst_cent_dist = dist\n nearest_index = j\n ### END OF CODE ###\n \n return nearest_index", "Teste a função criada", "# Seleciona um ponto aleatório no dataset\nindex = np.random.randint(dataset.shape[0])\na = dataset[index,:]\n\n# Usa a função para descobrir o centroid mais próximo\nidx_nearest_centroid = nearest_centroid(a, centroids)\n\n\n# Plota os dados ------------------------------------------------\nplt.scatter(dataset[:,0], dataset[:,1], s=10)\n# Plota o ponto aleatório escolhido em uma cor diferente\nplt.scatter(a[0], a[1], c='magenta', s=30)\n\n# Plota os centroids\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\n# Plota o centroid mais próximo com uma cor diferente\nplt.scatter(centroids[idx_nearest_centroid,0], \n centroids[idx_nearest_centroid,1],\n marker='^', c='springgreen', s=100)\n\n# Cria uma linha do ponto escolhido para o centroid selecionado\nplt.plot([a[0], centroids[idx_nearest_centroid,0]], \n [a[1], centroids[idx_nearest_centroid,1]],c='orange')\nplt.annotate('CENTROID', (centroids[idx_nearest_centroid,0], \n centroids[idx_nearest_centroid,1],))\nplt.show()", "1.2.3 Calcular centroid mais próximo de cada dado do dataset\nUtilizando a função anterior que retorna o índice do centroid mais próximo, calcule o centroid mais próximo de cada dado do dataset.", "def all_nearest_centroids(dataset, centroids):\n \"\"\"\n Calcula o índice do centroid mais próximo para cada \n ponto do dataset\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n \n Retornos:\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \"\"\"\n \n #### CODE HERE ####\n # Check if centroids has two dimensions and, if not, convert to\n if len(centroids.shape) == 1:\n centroids = np.array([centroids])\n\n nearest_indexes = np.zeros(len(dataset))\n \n for i in range(len(dataset)):\n nearest_indexes[i] = nearest_centroid(dataset[i], centroids)\n ### END OF CODE ###\n \n return nearest_indexes", "Teste a função criada visualizando os cluster formados.", "nearest_indexes = all_nearest_centroids(dataset, centroids)\n\nplt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\nplt.show()", "1.3 Métrica de avaliação\nApós formar os clusters, como sabemos se o resultado gerado é bom? Para isso, precisamos definir uma métrica de avaliação.\nO algoritmo K-means tem como objetivo escolher centróides que minimizem a soma quadrática das distância entre os dados de um cluster e seu centróide. 
Essa métrica é conhecida como inertia.\n$$\\sum_{i=0}^{n}\\min_{c_j \\in C}(||x_i - c_j||^2)$$\nA inertia, ou o critério de soma dos quadrados dentro do cluster, pode ser reconhecido como uma medida de o quão internamente coerentes são os clusters, porém ela sofre de alguns inconvenientes:\n\nA inertia pressupõe que os clusters são convexos e isotrópicos, o que nem sempre é o caso. Desta forma, pode não representar bem em aglomerados alongados ou variedades com formas irregulares.\nA inertia não é uma métrica normalizada: sabemos apenas que valores mais baixos são melhores e zero é o valor ótimo. Mas em espaços de dimensões muito altas, as distâncias euclidianas tendem a se tornar infladas (este é um exemplo da chamada “maldição da dimensionalidade”). A execução de um algoritmo de redução de dimensionalidade, como o PCA, pode aliviar esse problema e acelerar os cálculos.\n\nFonte: https://scikit-learn.org/stable/modules/clustering.html\nPara podermos avaliar os nosso clusters, codifique a métrica da inertia abaixo, para isso você pode utilizar a função de distância euclidiana construída anteriormente.\n$$inertia = \\sum_{i=0}^{n}\\min_{c_j \\in C} (dist(x_i, c_j))^2$$", "def inertia(dataset, centroids, nearest_indexes):\n \"\"\"\n Soma das distâncias quadradas das amostras para o \n centro do cluster mais próximo.\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \n Retornos:\n inertia -- Soma total do quadrado da distância entre \n os dados de um cluster e seu centróide\n \"\"\"\n \n #### CODE HERE ####\n # Check if centroids has two dimensions and, if not, convert to\n if len(centroids.shape) == 1:\n centroids = np.array([centroids])\n\n inertia = 0\n \n for i in range(len(dataset)):\n inertia = inertia + euclidean_distance(dataset[i], centroids[int(nearest_indexes[i])])**2\n ### END OF CODE ###\n \n return inertia", "Teste a função codificada executando o código abaixo.", "tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])\ntmp_centroide = np.array([[2,3,4]])\n\ntmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)\nif inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:\n print(\"Inertia calculada corretamente!\")\nelse:\n print(\"Função de inertia incorreta!\")\n\n# Use a função para verificar a inertia dos seus clusters\ninertia(dataset, centroids, nearest_indexes)", "1.4 Atualizar os clusters\nNessa etapa, os centróides são recomputados. 
O novo valor de cada centróide será a media de todos os dados atribuídos ao cluster.", "def update_centroids(dataset, centroids, nearest_indexes):\n \"\"\"\n Atualiza os centroids\n \n Argumentos:\n dataset -- Conjunto de dados - [m,n]\n centroids -- Lista com os centróides - [k,n]\n nearest_indexes -- Índices do centróides mais próximos - [m,1]\n \n Retornos:\n centroids -- Lista com centróides atualizados - [k,n]\n \"\"\"\n \n #### CODE HERE ####\n # Check if centroids has two dimensions and, if not, convert to\n if len(centroids.shape) == 1:\n centroids = np.array([centroids])\n \n sum_data_inCentroids = np.zeros((len(centroids), len(centroids[0])))\n num_data_inCentroids = np.zeros(len(centroids))\n \n for i in range(len(dataset)):\n cent_idx = int(nearest_indexes[i])\n sum_data_inCentroids[cent_idx] += dataset[i]\n num_data_inCentroids[cent_idx] += 1\n \n for i in range(len(centroids)):\n centroids[i] = sum_data_inCentroids[i]/num_data_inCentroids[i]\n ### END OF CODE ###\n \n return centroids", "Visualize os clusters formados", "nearest_indexes = all_nearest_centroids(dataset, centroids)\n\n# Plota os os cluster ------------------------------------------------\nplt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)\n\n# Plota os centroids\nplt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)\nfor index, centroid in enumerate(centroids):\n dataframe = dataset[nearest_indexes == index,:]\n for data in dataframe:\n plt.plot([centroid[0], data[0]], [centroid[1], data[1]], \n c='lightgray', alpha=0.3)\nplt.show()", "Execute a função de atualização e visualize novamente os cluster formados", "centroids = update_centroids(dataset, centroids, nearest_indexes)", "2. K-means\n2.1 Algoritmo completo\nUtilizando as funções codificadas anteriormente, complete a classe do algoritmo K-means!", "class KMeans():\n \n def __init__(self, n_clusters=8, max_iter=300):\n self.n_clusters = n_clusters\n self.max_iter = max_iter\n \n def fit(self,X):\n \n # Inicializa os centróides\n self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)\n \n # Computa o cluster de cada amostra\n self.labels_ = all_nearest_centroids(X, self.cluster_centers_)\n \n # Calcula a inércia inicial\n old_inertia = inertia(X, self.cluster_centers_, self.labels_)\n self.inertia_ = old_inertia\n \n for index in range(self.max_iter):\n \n #### CODE HERE ####\n self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)\n self.labels_ = all_nearest_centroids(X, self.cluster_centers_)\n self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)\n \n if (self.inertia_ == old_inertia):\n break\n else:\n old_inertia = self.inertia_\n ### END OF CODE ###\n \n return self\n \n def predict(self, X):\n \n return all_nearest_centroids(X, self.cluster_centers_)", "Verifique o resultado do algoritmo abaixo!", "kmeans = KMeans(n_clusters=3)\nkmeans.fit(dataset)\n\nprint(\"Inércia = \", kmeans.inertia_)\n\nplt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)\nplt.scatter(kmeans.cluster_centers_[:,0], \n kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)\nplt.show()", "2.2 Comparar com algoritmo do Scikit-Learn\nUse a implementação do algoritmo do scikit-learn do K-means para o mesmo conjunto de dados. Mostre o valor da inércia e os conjuntos gerados pelo modelo. 
Você pode usar a mesma estrutura da célula de código anterior.\n\nDica: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans", "#### CODE HERE ####\nfrom sklearn.cluster import KMeans as sk_KMeans\n\nskkmeans = sk_KMeans(n_clusters=3).fit(dataset)\n\nprint(\"Scikit-Learn KMeans' inertia: \", skkmeans.inertia_)\nprint(\"My KMeans inertia: \", kmeans.inertia_)", "3. Método do cotovelo\nImplemete o método do cotovelo e mostre o melhor K para o conjunto de dados.", "#### CODE HERE ####\n\n# Initialize array of Ks\nks = np.array(range(1, 11))\n# Create array to receive the inertias for each K\ninertias = np.zeros(len(ks))\n\nfor i in range(len(ks)):\n # Compute inertia for K\n kmeans = KMeans(ks[i]).fit(dataset)\n inertias[i] = kmeans.inertia_\n \n # Best K is the last one to improve the inertia in 30%\n if (i > 0 and (inertias[i - 1] - inertias[i])/inertias[i] > 0.3):\n best_k_idx = i\n\nprint(\"Best K: {}\\n\".format(ks[best_k_idx]))\nplt.plot(ks, inertias, marker='o')\nplt.plot(ks[best_k_idx], inertias[best_k_idx], 'ro')", "4. Dataset Real\nExercícios\n1 - Aplique o algoritmo do K-means desenvolvido por você no datatse iris [1]. Mostre os resultados obtidos utilizando pelo menos duas métricas de avaliação de clusteres [2].\n\n[1] http://archive.ics.uci.edu/ml/datasets/iris\n[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation\n\n\nDica: você pode utilizar as métricas completeness e homogeneity.\n\n2 - Tente melhorar o resultado obtido na questão anterior utilizando uma técnica de mineração de dados. Explique a diferença obtida. \n\nDica: você pode tentar normalizar os dados [3].\n- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html\n\n3 - Qual o número de clusteres (K) você escolheu na questão anterior? Desenvolva o Método do Cotovelo sem usar biblioteca e descubra o valor de K mais adequado. Após descobrir, utilize o valor obtido no algoritmo do K-means.\n4 - Utilizando os resultados da questão anterior, refaça o cálculo das métricas e comente os resultados obtidos. Houve uma melhoria? Explique.", "#### CODE HERE ####" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
johntanz/ROP
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
gpl-2.0
[ "Masimo Analysis\nFor Pulse Ox. Analysis, make sure the data file is the right .csv format:\na) Headings on Row 1\nb) Open the csv file through Notepad or TextEdit and delete extra \nrow commas (non-printable characters)\nc) There are always Dates in Column A and Time in Column B. \nd) There might be a row that says \"Time Gap Present\". Delete this row from Notepad \nor TextEdit", "#the usual beginning\nimport pandas as pd\nimport numpy as np\nfrom pandas import Series, DataFrame\nfrom datetime import datetime, timedelta\nfrom pandas import concat\n\n#define any string with 'C' as NaN\ndef readD(val):\n if 'C' in val:\n return np.nan\n return val", "Import File into Python\nChange File Name!", "df = pd.read_csv('/Users/John/Dropbox/LLU/ROP/Pulse Ox/ROP018PO.csv',\n parse_dates={'timestamp': ['Date','Time']},\n index_col='timestamp',\n usecols=['Date', 'Time', 'SpO2', 'PR', 'PI', 'Exceptions'],\n na_values=['0'],\n converters={'Exceptions': readD}\n )\n\n#parse_dates tells the read_csv function to combine the date and time column \n#into one timestamp column and parse it as a timestamp.\n# pandas is smart enough to know how to parse a date in various formats\n\n#index_col sets the timestamp column to be the index.\n\n#usecols tells the read_csv function to select only the subset of the columns.\n#na_values is used to turn 0 into NaN\n\n#converters: readD is the dict that means any string with 'C' with be NaN (for PI)\n\n#dfclean = df[27:33][df[27:33].loc[:, ['SpO2', 'PR', 'PI', 'Exceptions']].apply(pd.notnull).all(1)]\n#clean the dataframe to get rid of rows that have NaN for PI purposes\ndf_clean = df[df.loc[:, ['PI', 'Exceptions']].apply(pd.notnull).all(1)]\n\n\"\"\"Pulse ox date/time is 1 mins and 32 seconds faster than phone. Have to correct for it.\"\"\"\n\nTC = timedelta(minutes=1, seconds=32)", "Set Date and Time of ROP Exam and Eye Drops", "df_first = df.first_valid_index() #get the first number from index\n\nY = pd.to_datetime(df_first) #convert index to datetime\n# Y = TIME DATA COLLECTION BEGAN / First data point on CSV\n\n# SYNTAX: \n# datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])\n\nW = datetime(2016, 1, 20, 7, 30)+TC\n# W = first eye drop dtarts\nX = datetime(2016, 1, 20, 8, 42)+TC\n# X = ROP Exam Started\nZ = datetime(2016, 1, 20, 8, 46)+TC\n# Z = ROP Exam Ended\n\ndf_last = df.last_valid_index() #get the last number from index\n\nQ = pd.to_datetime(df_last) \n\n# Q = TIME DATA COLLECTION ENDED / Last Data point on CSV", "Baseline Averages", "avg0PI = df_clean.PI[Y:W].mean()\navg0O2 = df.SpO2[Y:W].mean()\navg0PR = df.PR[Y:W].mean()\n\nprint 'Baseline Averages\\n', 'PI :\\t',avg0PI, '\\nSpO2 :\\t',avg0O2,'\\nPR :\\t',avg0PR,\n#df.std() for standard deviation", "Average q 5 Min for 1 hour after 1st Eye Drops", "# Every 5 min Average from start of eye drops to start of exam\n\ndef perdeltadrop(start, end, delta):\n rdrop = []\n curr = start\n while curr < end:\n rdrop.append(curr)\n curr += delta\n return rdrop\n \ndfdropPI = df_clean.PI[W:W+timedelta(hours=1)]\ndfdropO2 = df.SpO2[W:W+timedelta(hours=1)]\ndfdropPR = df.PR[W:W+timedelta(hours=1)]\nwindrop = timedelta(minutes=5)#make the range\nrdrop = perdeltadrop(W, W+timedelta(minutes=15), windrop)\n\navgdropPI = Series(index = rdrop, name = 'PI DurEyeD')\navgdropO2 = Series(index = rdrop, name = 'SpO2 DurEyeD')\navgdropPR = Series(index = rdrop, name = 'PR DurEyeD')\n\nfor i in rdrop:\n avgdropPI[i] = dfdropPI[i:(i+windrop)].mean()\n avgdropO2[i] = dfdropO2[i:(i+windrop)].mean()\n 
avgdropPR[i] = dfdropPR[i:(i+windrop)].mean()\n \nresultdrops = concat([avgdropPI, avgdropO2, avgdropPR], axis=1, join='inner')\nprint resultdrops\n", "Average Every 10 Sec During ROP Exam for first 4 minutes", "#AVERAGE DURING ROP EXAM FOR FIRST FOUR MINUTES\ndef perdelta1(start, end, delta):\n r1 = []\n curr = start\n while curr < end:\n r1.append(curr)\n curr += delta\n return r1\n\ndf1PI = df_clean.PI[X:X+timedelta(minutes=4)]\ndf1O2 = df.SpO2[X:X+timedelta(minutes=4)]\ndf1PR = df.PR[X:X+timedelta(minutes=4)]\nwin1 = timedelta(seconds=10) #any unit of time & make the range\n\nr1 = perdelta1(X, X+timedelta(minutes=4), win1)\n\n#make the series to store\navg1PI = Series(index = r1, name = 'PI DurEx')\navg1O2 = Series(index = r1, name = 'SpO2 DurEx')\navg1PR = Series(index = r1, name = 'PR DurEX')\n#average!\nfor i1 in r1:\n avg1PI[i1] = df1PI[i1:(i1+win1)].mean()\n avg1O2[i1] = df1O2[i1:(i1+win1)].mean()\n avg1PR[i1] = df1PR[i1:(i1+win1)].mean()\n\nresult1 = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner')\nprint result1\n", "Average Every 5 Mins Hour 1-2 After ROP Exam", "#AVERAGE EVERY 5 MINUTES ONE HOUR AFTER ROP EXAM\n\ndef perdelta2(start, end, delta):\n r2 = []\n curr = start\n while curr < end:\n r2.append(curr)\n curr += delta\n return r2\n\n# datetime(year, month, day, hour, etc.)\n\ndf2PI = df_clean.PI[Z:(Z+timedelta(hours=1))]\ndf2O2 = df.SpO2[Z:(Z+timedelta(hours=1))]\ndf2PR = df.PR[Z:(Z+timedelta(hours=1))]\nwin2 = timedelta(minutes=5) #any unit of time, make the range\n\nr2 = perdelta2(Z, (Z+timedelta(hours=1)), win2) #define the average using function\n\n#make the series to store\navg2PI = Series(index = r2, name = 'PI q5MinHr1')\navg2O2 = Series(index = r2, name = 'O2 q5MinHr1')\navg2PR = Series(index = r2, name = 'PR q5MinHr1')\n\n#average!\nfor i2 in r2:\n avg2PI[i2] = df2PI[i2:(i2+win2)].mean()\n avg2O2[i2] = df2O2[i2:(i2+win2)].mean()\n avg2PR[i2] = df2PR[i2:(i2+win2)].mean()\n\nresult2 = concat([avg2PI, avg2O2, avg2PR], axis=1, join='inner')\nprint result2", "Average Every 15 Mins Hour 2-3 After ROP Exam", "#AVERAGE EVERY 15 MINUTES TWO HOURS AFTER ROP EXAM\n\ndef perdelta3(start, end, delta):\n r3 = []\n curr = start\n while curr < end:\n r3.append(curr)\n curr += delta\n return r3\n\n# datetime(year, month, day, hour, etc.)\n\ndf3PI = df_clean.PI[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\ndf3O2 = df.SpO2[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\ndf3PR = df.PR[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))]\nwin3 = timedelta(minutes=15) #any unit of time, make the range\n\nr3 = perdelta3((Z+timedelta(hours=1)), (Z+timedelta(hours=2)), win3)\n\n#make the series to store\navg3PI = Series(index = r3, name = 'PI q15MinHr2')\navg3O2 = Series(index = r3, name = 'O2 q15MinHr2')\navg3PR = Series(index = r3, name = 'PR q15MinHr2')\n\n#average!\nfor i3 in r3:\n avg3PI[i3] = df3PI[i3:(i3+win3)].mean()\n avg3O2[i3] = df3O2[i3:(i3+win3)].mean()\n avg3PR[i3] = df3PR[i3:(i3+win3)].mean()\n \nresult3 = concat([avg3PI, avg3O2, avg3PR], axis=1, join='inner')\nprint result3\n", "Average Every 30 Mins Hour 3-4 After ROP Exam", "#AVERAGE EVERY 30 MINUTES THREE HOURS AFTER ROP EXAM\n\ndef perdelta4(start, end, delta):\n r4 = []\n curr = start\n while curr < end:\n r4.append(curr)\n curr += delta\n return r4\n\n# datetime(year, month, day, hour, etc.)\n\ndf4PI = df_clean.PI[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\ndf4O2 = df.SpO2[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\ndf4PR = df.PR[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))]\nwin4 = timedelta(minutes=30) 
#any unit of time, make the range\n\nr4 = perdelta4((Z+timedelta(hours=2)), (Z+timedelta(hours=3)), win4)\n\n#make the series to store\navg4PI = Series(index = r4, name = 'PI q30MinHr3')\navg4O2 = Series(index = r4, name = 'O2 q30MinHr3')\navg4PR = Series(index = r4, name = 'PR q30MinHr3')\n\n#average!\nfor i4 in r4:\n avg4PI[i4] = df4PI[i4:(i4+win4)].mean()\n avg4O2[i4] = df4O2[i4:(i4+win4)].mean()\n avg4PR[i4] = df4PR[i4:(i4+win4)].mean()\n \nresult4 = concat([avg4PI, avg4O2, avg4PR], axis=1, join='inner')\nprint result4\n", "Average Every Hour 4-24 Hours Post ROP Exam", "#AVERAGE EVERY 60 MINUTES 4-24 HOURS AFTER ROP EXAM\n\ndef perdelta5(start, end, delta):\n r5 = []\n curr = start\n while curr < end:\n r5.append(curr)\n curr += delta\n return r5\n\n# datetime(year, month, day, hour, etc.)\n\ndf5PI = df_clean.PI[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\ndf5O2 = df.SpO2[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\ndf5PR = df.PR[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))]\nwin5 = timedelta(minutes=60) #any unit of time, make the range\n\nr5 = perdelta5((Z+timedelta(hours=3)), (Z+timedelta(hours=24)), win5)\n\n#make the series to store\navg5PI = Series(index = r5, name = 'PI q60MinHr4+')\navg5O2 = Series(index = r5, name = 'O2 q60MinHr4+')\navg5PR = Series(index = r5, name = 'PR q60MinHr4+')\n\n#average!\nfor i5 in r5:\n avg5PI[i5] = df5PI[i5:(i5+win5)].mean()\n avg5O2[i5] = df5O2[i5:(i5+win5)].mean()\n avg5PR[i5] = df5PR[i5:(i5+win5)].mean()\n\nresult5 = concat([avg5PI, avg5O2, avg5PR], axis=1, join='inner')\nprint result5\n", "Mild, Moderate, and Severe Desaturation Events", "df_O2_pre = df[Y:W]\n\n\n#Find count of these ranges\nbelow = 0 # v <=80\nmiddle = 0 #v >= 81 and v<=84\nabove = 0 #v >=85 and v<=89\nls = []\n\nb_dict = {}\nm_dict = {}\na_dict = {}\n\nfor i, v in df_O2_pre['SpO2'].iteritems():\n \n if v <= 80: #below block\n \n if not ls: \n ls.append(v)\n else:\n if ls[0] >= 81: #if the range before was not below 80\n\n if len(ls) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2\n\n if ls[0] <= 84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n ls = [v]\n\n else: #old list wasn't long enough to count\n ls = [v]\n else: #if in the same range\n ls.append(v)\n \n elif v >= 81 and v<= 84: #middle block\n \n if not ls:\n ls.append(v)\n else:\n if ls[0] <= 80 or (ls[0]>=85 and ls[0]<= 89): #if not in the middle range\n if len(ls) >= 5: #if range was greater than 10 seconds\n\n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n ls = [v]\n else: #old list wasn't long enough to count\n ls = [v]\n\n else:\n ls.append(v)\n \n elif v >= 85 and v <=89: #above block\n \n if not ls:\n ls.append(v)\n else:\n if ls[0] <=84 : #if not in the above range\n\n if len(ls) >= 5: #if range was greater than \n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n else: #old list wasn't long enough to count\n ls = [v]\n else:\n ls.append(v)\n \n else: #v>90 or something else weird. 
start the list over\n ls = []\n#final list check\nif len(ls) >= 5:\n if ls[0] <= 80: #was it in the below range?\n b_dict[below] = ls\n below += 1\n ls = [v]\n elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range?\n m_dict[middle] = ls\n middle += 1\n ls = [v]\n elif ls[0] >= 85 and ls[0] <=89: #was it in the above range?\n a_dict[above] = ls\n above += 1\n \nb_len = 0.0\nfor key, val in b_dict.iteritems():\n b_len += len(val)\n\nm_len = 0.0\nfor key, val in m_dict.iteritems():\n m_len += len(val)\n \na_len = 0.0\nfor key, val in a_dict.iteritems():\n a_len += len(val)\n \n\n \n\n #post exam duraiton length analysis\ndf_O2_post = df[Z:Q]\n\n\n#Find count of these ranges\nbelow2 = 0 # v <=80\nmiddle2= 0 #v >= 81 and v<=84\nabove2 = 0 #v >=85 and v<=89\nls2 = []\n\nb_dict2 = {}\nm_dict2 = {}\na_dict2 = {}\n\nfor i2, v2 in df_O2_post['SpO2'].iteritems():\n \n if v2 <= 80: #below block\n \n if not ls2: \n ls2.append(v2)\n else:\n if ls2[0] >= 81: #if the range before was not below 80\n\n if len(ls2) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2\n\n if ls2[0] <= 84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n ls2 = [v2]\n\n else: #old list wasn't long enough to count\n ls2 = [v2]\n else: #if in the same range\n ls2.append(v2)\n \n elif v2 >= 81 and v2<= 84: #middle block\n \n if not ls2:\n ls2.append(v2)\n else:\n if ls2[0] <= 80 or (ls2[0]>=85 and ls2[0]<= 89): #if not in the middle range\n if len(ls2) >= 5: #if range was greater than 10 seconds\n\n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n ls2 = [v2]\n else: #old list wasn't long enough to count\n ls2 = [v2]\n\n else:\n ls2.append(v2)\n \n elif v2 >= 85 and v2 <=89: #above block\n \n if not ls2:\n ls2.append(v2)\n else:\n if ls2[0] <=84 : #if not in the above range\n\n if len(ls2) >= 5: #if range was greater than \n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2 = [v2]\n elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n else: #old list wasn't long enough to count\n ls2 = [v2]\n else:\n ls2.append(v2)\n \n else: #v2>90 or something else weird. start the list over\n ls2 = []\n#final list check\nif len(ls2) >= 5:\n if ls2[0] <= 80: #was it in the below range?\n b_dict2[below2] = ls2\n below2 += 1\n ls2= [v2]\n elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range?\n m_dict2[middle2] = ls2\n middle2 += 1\n ls2 = [v2]\n elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range?\n a_dict2[above2] = ls2\n above2 += 1\n \nb_len2 = 0.0\nfor key, val2 in b_dict2.iteritems():\n b_len2 += len(val2)\n\nm_len2 = 0.0\nfor key, val2 in m_dict2.iteritems():\n m_len2 += len(val2)\n \na_len2 = 0.0\nfor key, val2 in a_dict2.iteritems():\n a_len2 += len(val2)\n\n#print results from count and min\n\nprint \"Desat Counts for X mins\\n\" \nprint \"Pre Mild Desat (85-89) Count: %s\\t\" %above, \"for %s min\" %((a_len*2)/60.)\nprint \"Pre Mod Desat (81-84) Count: %s\\t\" %middle, \"for %s min\" %((m_len*2)/60.) \nprint \"Pre Sev Desat (=< 80) Count: %s\\t\" %below, \"for %s min\\n\" %((b_len*2)/60.)\n\nprint \"Post Mild Desat (85-89) Count: %s\\t\" %above2, \"for %s min\" %((a_len2*2)/60.) 
\nprint \"Post Mod Desat (81-84) Count: %s\\t\" %middle2, \"for %s min\" %((m_len2*2)/60.) \nprint \"Post Sev Desat (=< 80) Count: %s\\t\" %below2, \"for %s min\\n\" %((b_len2*2)/60.) \n\n\n\nprint \"Data Recording Time!\"\nprint '*' * 10\nprint \"Pre-Exam Data Recording Length\\t\", X - Y # start of exam - first data point\nprint \"Post-Exam Data Recording Length\\t\", Q - Z #last data point - end of exam\nprint \"Total Data Recording Length\\t\", Q - Y #last data point - first data point\n\nPre = ['Pre',(X-Y)]\nPost = ['Post',(Q-Z)]\nTotal = ['Total',(Q-Y)]\nRTL = [Pre, Post, Total]\n\nPreMild = ['Pre Mild Desats \\t',(above), 'for', (a_len*2)/60., 'mins']\nPreMod = ['Pre Mod Desats \\t',(middle), 'for', (m_len*2)/60., 'mins']\nPreSev = ['Pre Sev Desats \\t',(below), 'for', (b_len*2)/60., 'mins']\nPreDesats = [PreMild, PreMod, PreSev]\n\nPostMild = ['Post Mild Desats \\t',(above2), 'for', (a_len2*2)/60., 'mins']\nPostMod = ['Post Mod Desats \\t',(middle2), 'for', (m_len2*2)/60., 'mins']\nPostSev = ['Post Sev Desats \\t',(below2), 'for', (b_len2*2)/60., 'mins']\nPostDesats = [PostMild, PostMod, PostSev]\n\n#creating a list for recording time length\n\n#did it count check sort correctly? get rid of the ''' if you want to check your values\n'''\nprint \"Mild check\"\nfor key, val in b_dict.iteritems():\n print all(i <=80 for i in val)\n\nprint \"Moderate check\"\nfor key, val in m_dict.iteritems():\n print all(i >= 81 and i<=84 for i in val)\n \nprint \"Severe check\"\nfor key, val in a_dict.iteritems():\n print all(i >= 85 and i<=89 for i in val)\n'''", "Export to CSV", "import csv\nclass excel_tab(csv.excel):\n delimiter = '\\t'\ncsv.register_dialect(\"excel_tab\", excel_tab)\n\nwith open('ROP018_PO.csv', 'w') as f: #CHANGE CSV FILE NAME, saves in same directory\n writer = csv.writer(f, dialect=excel_tab)\n #writer.writerow(['PI, O2, PR']) accidently found this out but using commas = gives me columns YAY! fix this\n #to make code look nice ok nice\n writer.writerow([avg0PI, ',PI Start'])\n for i in rdrop:\n writer.writerow([avgdropPI[i]]) #NEEDS BRACKETS TO MAKE IT SEQUENCE\n for i in r1:\n writer.writerow([avg1PI[i]])\n for i in r2:\n writer.writerow([avg2PI[i]])\n for i in r3:\n writer.writerow([avg3PI[i]])\n for i in r4:\n writer.writerow([avg4PI[i]])\n for i in r5:\n writer.writerow([avg5PI[i]])\n writer.writerow([avg0O2, ',SpO2 Start'])\n for i in rdrop:\n writer.writerow([avgdropO2[i]])\n for i in r1:\n writer.writerow([avg1O2[i]])\n for i in r2:\n writer.writerow([avg2O2[i]])\n for i in r3:\n writer.writerow([avg3O2[i]])\n for i in r4:\n writer.writerow([avg4O2[i]])\n for i in r5:\n writer.writerow([avg5O2[i]])\n writer.writerow([avg0PR, ',PR Start'])\n for i in rdrop:\n writer.writerow([avgdropPR[i]])\n for i in r1:\n writer.writerow([avg1PR[i]])\n for i in r2:\n writer.writerow([avg2PR[i]])\n for i in r3:\n writer.writerow([avg3PR[i]])\n for i in r4:\n writer.writerow([avg4PR[i]])\n for i in r5:\n writer.writerow([avg5PR[i]])\n writer.writerow(['Data Recording Time Length'])\n writer.writerows(RTL)\n writer.writerow(['Pre Desat Counts for X Minutes'])\n writer.writerows(PreDesats)\n writer.writerow(['Post Dest Counts for X Minutes'])\n writer.writerows(PostDesats)\n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
liulixiang1988/documents
Python数据科学101.ipynb
mit
[ "Python数据科学101\n1. 配置系统\n\nPython\nJDK\n创建C:\\Hadoop\\bin\n在这里下载windows版的hadoop https://github.com/steveloughran/winutils 拷贝winutils到C:\\Hadoop\\bin下面\n创建HADOOP_HOME环境变量,指向C:\\Hadoop\n创建C:\\temp\\hive文件夹\n运行c:\\hadoop\\bin\\winutils chmod 777 \\temp\\hive\n下载Spark: https://spark.apache.org/downloads.html\n解压下载的Spark的文件到C:\\SPARK目录下,其它操作系统的放到home目录\n创建SPARK_HOME,指向C:\\SPARK\n运行c:\\spark\\bin\\spark-shell看看是否安装成功\n\n2. 使用Python\n\n安装Anaconda\n检查conda: conda --version\n检查安装的包: conda list\n升级: conda update conda\n\n3. 实验环境\n输入jupyter notebook", "%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0, 3*np.pi, 500)\nplt.plot(x, np.sin(x**2))\nplt.title('Sine wave')", "4. Pandas简介\n最重要的是DataFrame和Series", "import numpy as np\nimport pandas as pd", "4.1 Series\n创建一个series,包含空值NaN", "s = pd.Series([1, 3, 5, np.nan, 6, 8])\ns[4] # 6.0", "4.2 Dataframes", "df = pd.DataFrame({'data': ['2016-01-01', '2016-01-02', '2016-01-03'], 'qty': [20, 30, 40]})\ndf", "更大的数据应当从文件里获取", "rain = pd.read_csv('data/rainfall/rainfall.csv')\nrain\n\n# 加载一列\nrain['City']\n\n# 加载一行(第二行)\nrain.loc[[1]]\n\n# 第一行和第二行\nrain.loc[0:1]", "4.3 过滤", "# 查找所有降雨量小于10的数据\nrain[rain['Rainfall'] < 10]", "查找4月份的降雨", "rain[rain['Month'] == 'Apr']", "查找Los Angeles的数据", "rain[rain['City'] == 'Los Angeles']", "4.4 给行起名(Naming Rows)", "rain = rain.set_index(rain['City'] + rain['Month'])", "注意,当我们修改dataframe时,其实是在创建一个副本,因此要把这个值再赋值给原有的dataframe", "rain.loc['San FranciscoApr']", "5. Pandas 例子", "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\ndf = pd.read_csv('data/nycflights13/flights.csv.gz')\n\ndf", "这里我们主要关注统计数据和可视化。我们来看一下按月统计的晚点时间的均值。", "mean_delay_by_month = df.groupby(['month'])['arr_delay'].mean()\nmean_delay_by_month\n\nmean_month_plt = mean_delay_by_month.plot(kind='bar', title='Mean Delay By Month')\nmean_month_plt", "注意,这里9、10月均值会有负值。", "mean_delay_by_month_ord = df[(df.dest == 'ORD')].groupby(['month'])['arr_delay'].mean()\nprint(\"Flights to Chicago (ORD)\")\nprint(mean_delay_by_month_ord)\n\nmean_month_plt_ord = mean_delay_by_month_ord.plot(kind='bar', title=\"Mean Delay By Month (Chicago)\")\nmean_month_plt_ord\n\n# 再看看Los Angeles进行比较一下\nmean_delay_by_month_lax = df[(df.dest == 'LAX')].groupby(['month'])['arr_delay'].mean()\nprint(\"Flights to Chicago (LAX)\")\nprint(mean_delay_by_month_lax)\n\nmean_month_plt_lax = mean_delay_by_month_lax.plot(kind='bar', title=\"Mean Delay By Month (Los Angeles)\")\nmean_month_plt_lax", "从上面的图表中我们可以直观的看到一些特征。现在我们再来看看每个航空公司晚点的情况,并进行一些可视化。", "# 看看是否不同的航空公司对晚点会有不同的影响\ndf[['carrier', 'arr_delay']].groupby('carrier').mean().plot(kind='bar', figsize=(12, 8))\nplt.xticks(rotation=0)\nplt.xlabel('Carrier')\nplt.ylabel('Average Delay in Min')\nplt.title('Average Arrival Delay by Carrier in 2008, All airports')\n\ndf[['carrier', 'dep_delay']].groupby('carrier').mean().plot(kind='bar', figsize=(12, 8))\nplt.xticks(rotation=0)\nplt.xlabel('Carrier')\nplt.ylabel('Average Delay in Min')\nplt.title('Average Departure Delay by Carrier in 2008, All airports')", "从上面的图表里我们可以看到F9(Front Airlines)几乎是最经常晚点的,而夏威夷(HA)在这方面表现最好。\n5.3 Joins\n我们有多个数据集,天气、机场的。现在我们来看一下如何把两个表连接在一起", "weather = pd.read_csv('data/nycflights13/weather.csv.gz')\nweather\n\ndf_withweather = pd.merge(df, weather, how='left', on=['year', 'month', 'day', 'hour'])\ndf_withweather\n\nairports = pd.read_csv('data/nycflights13/airports.csv.gz')\nairports\n\ndf_withairport = pd.merge(df_withweather, airports, how='left', left_on='dest', 
right_on='faa')\ndf_withairport", "6 Numpy和SciPy\nNumpy和SciPy是Python数据科学的CP。早期Python的list比较慢,并且对于处理矩阵和向量运算不太好,因此有了Numpy来解决这个问题。它引入了array-type的数据类型。\n创建数组:", "import numpy as np\na = np.array([1, 2, 3])\na", "注意这里我们传的是列表,而不是np.array(1, 2, 3)。\n现在我们创建一个arange", "np.arange(10)\n\n# 给序列乘以一个系数\nnp.arange(10) * np.pi", "我们也可以使用shape方法从一维数组创建多维数组", "a = np.array([1, 2, 3, 4, 5, 6])\na.shape = (2, 3)\na", "6.1 矩阵Matrix", "np.matrix('1 2; 3 4')\n\n#矩阵乘\na1 = np.matrix('1 2; 3 4')\na2 = np.matrix('3 4; 5 7')\na1 * a2\n\n#array转换为矩阵\nmat_a = np.mat(a1)\nmat_a", "6.2 稀疏矩阵(Sparse Matrices)", "import numpy, scipy.sparse\nn = 100000\nx = (numpy.random.rand(n) * 2).astype(int).astype(float) #50%稀疏矩阵\nx_csr = scipy.sparse.csr_matrix(x)\nx_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape))\nx_dok", "6.3 从CSV文件中加载数据", "import csv\nwith open('data/array/array.csv', 'r') as csvfile:\n csvreader = csv.reader(csvfile)\n data = []\n for row in csvreader:\n row = [float(x) for x in row]\n data.append(row)\ndata", "6.4 求解矩阵方程(Solving a matrix)", "import numpy as np\nimport scipy as sp\na = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]])\nb = np.array([2, 4, -1])\nx = np.linalg.solve(a, b)\nx\n\n#检查结果是否正确\nnp.dot(a, x) == b", "7 Scikit-learn 简介\n前面我们介绍了pandas和numpy、scipy。现在我们来介绍python机器库Scikit。首先需要先知道机器学习的两种:\n\n监督学习(Supervised Learning): 从训练集建立模型进行预测\n非监督学习(Unsupervised Learning): 从数据中推测模型,比如从文本中找出主题\n\nScikit-learn有一下特性:\n- 预处理(Preprocessing):为机器学习reshape数据\n- 降维处理(Dimensionality reduction):减少变量的重复\n- 分类(Classification): 预测分类\n- 回归(regression):预测连续变量\n- 聚类(Clustering):从数据中发现自然的模式\n- 模型选取(Model Selection):为数据找到最优模型\n这里我们还是看nycflights13的数据集。", "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\nflights = pd.read_csv('data/nycflights13/flights.csv.gz')\nweather = pd.read_csv('data/nycflights13/weather.csv.gz')\nairports = pd.read_csv('data/nycflights13/airports.csv.gz')\n\ndf_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])\ndf = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')\n\ndf = df.dropna()\ndf", "7.1 特征向量", "pred = 'dep_delay'\nfeatures = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',\n 'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed', 'wind_gust',\n 'precip', 'pressure', 'visib']\nfeatures_v = df[features]\npred_v = df[pred]\n\npd.options.mode.chained_assignment = None #default='warn'\n\n# 因为航空公司不是一个数字,我们把它转化为数字哑变量\nfeatures_v['carrier'] = pd.factorize(features_v['carrier'])[0]\n\n# dest也不是一个数字,我们也把它转为数字\nfeatures_v['dest'] = pd.factorize(features_v['dest'])[0]\n\nfeatures_v", "7.2 对特征向量进行标准化(Scaling the feature vector)", "# 因为各个特征的维度各不相同,我们需要做标准化\nscaler = StandardScaler()\nscaled_features = scaler.fit_transform(features_v)\n\nscaled_features", "7.3 特征降维(Reducing Dimensions)\n我们使用PCA(Principle Component Analysis主成分析)把特征降维为2个", "from sklearn.decomposition import PCA\n\npca = PCA(n_components=2)\nX_r = pca.fit(scaled_features).transform(scaled_features)\n\nX_r", "7.4 画图(Plotting)", "import matplotlib.pyplot as plt\n\nprint('explained variance ratio (first two components): %s' \n % str(pca.explained_variance_ratio_))\n\nplt.figure()\nlw = 2\n\nplt.scatter(X_r[:,0], X_r[:,1], alpha=.8, lw=lw)\nplt.title('PCA of flights dataset')", "8 构建分类器(Build a classifier)\n我们来预测一个航班是否会晚点", "%matplotlib inline\n\nimport pandas as pd\nimport 
matplotlib.pyplot as plt\nimport numpy as np\nimport sklearn\nfrom sklearn import linear_model, cross_validation, metrics, svm, ensemble\nfrom sklearn.metrics import classification_report, confusion_matrix, precision_recall_fscore_support, accuracy_score\nfrom sklearn.cross_validation import train_test_split, cross_val_score, ShuffleSplit\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\nflights = pd.read_csv('data/nycflights13/flights.csv.gz')\nweather = pd.read_csv('data/nycflights13/weather.csv.gz')\nairports = pd.read_csv('data/nycflights13/airports.csv.gz')\n\ndf_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])\ndf = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')\n\ndf = df.dropna()\n\ndf\n\npred = 'dep_delay'\nfeatures = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',\n 'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed', 'wind_gust',\n 'precip', 'pressure', 'visib']\nfeatures_v = df[features]\npred_v = df[pred]\n\nhow_late_is_late = 15.0\n\npd.options.mode.chained_assignment = None #default='warn'\n\n\n# 因为航空公司不是一个数字,我们把它转化为数字哑变量\nfeatures_v['carrier'] = pd.factorize(features_v['carrier'])[0]\n\n# dest也不是一个数字,我们也把它转为数字\nfeatures_v['dest'] = pd.factorize(features_v['dest'])[0]\n\nscaler = StandardScaler()\nscaled_features_v = scaler.fit_transform(features_v)\n\nfeatures_train, features_test, pred_train, pred_test = train_test_split(\n scaled_features_v, pred_v, test_size=0.30, random_state=0)\n\n# 使用logistic回归来执行分类\n\nclf_lr = sklearn.linear_model.LogisticRegression(penalty='l2',\n class_weight='balanced')\nlogistic_fit = clf_lr.fit(features_train, np.where(pred_train >= how_late_is_late, 1, 0))\n\npredictions = clf_lr.predict(features_test)\n\n# summary Report\n\n# Confusion Matrix\ncm_lr = confusion_matrix(np.where(pred_test >= how_late_is_late, 1, 0),\n predictions)\nprint(\"Confusion Matrix\")\nprint(pd.DataFrame(cm_lr))\n\n# 获取精确值\nreport_lr = precision_recall_fscore_support(\n list(np.where(pred_test >= how_late_is_late, 1, 0)),\n list(predictions), average='binary')\n\n#打印精度值\nprint(\"\\nprecision = %0.2f, recall = %0.2f, F1 = %0.2f, accuracy = %0.2f\"\n % (report_lr[0], report_lr[1], report_lr[2],\n accuracy_score(list(np.where(pred_test >= how_late_is_late, 1, 0)),\n list(predictions))))", "9 聚合数据(Cluster data)\n最简单的聚类方法是K-Means", "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sklearn\nfrom sklearn.cluster import KMeans\nfrom sklearn import linear_model, cross_validation, cluster\nfrom sklearn.metrics import classification_report, confusion_matrix, precision_recall_fscore_support, accuracy_score\nfrom sklearn.cross_validation import train_test_split, cross_val_score, ShuffleSplit\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\n\nflights = pd.read_csv('data/nycflights13/flights.csv.gz')\nweather = pd.read_csv('data/nycflights13/weather.csv.gz')\nairports = pd.read_csv('data/nycflights13/airports.csv.gz')\n\ndf_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])\ndf = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')\n\ndf = df.dropna()\n\npred = 'dep_delay'\nfeatures = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',\n 'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed', 'wind_gust',\n 'precip', 'pressure', 'visib']\nfeatures_v 
= df[features]\npred_v = df[pred]\n\nhow_late_is_late = 15.0\n\npd.options.mode.chained_assignment = None #default='warn'\n\n# 因为航空公司不是一个数字,我们把它转化为数字哑变量\nfeatures_v['carrier'] = pd.factorize(features_v['carrier'])[0]\n\n# dest也不是一个数字,我们也把它转为数字\nfeatures_v['dest'] = pd.factorize(features_v['dest'])[0]\n\nscaler = StandardScaler()\nscaled_features_v = scaler.fit_transform(features_v)\n\nfeatures_train, features_test, pred_train, pred_test = train_test_split(\n scaled_features_v, pred_v, test_size=0.30, random_state=0)\n\ncluster = sklearn.cluster.KMeans(n_clusters=8, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='auto', random_state=None, verbose=0)\ncluster.fit(features_train)\n\n# 预测测试数据\nresult = cluster.predict(features_test)\n\nresult\n\nimport matplotlib.pyplot as plt\nfrom sklearn.decomposition import PCA\n\nreduced_data = PCA(n_components=2).fit_transform(features_train)\nkmeans = KMeans(init='k-means++', n_clusters=8, n_init=10)\nkmeans.fit(reduced_data)\n\n# mesh的步长\nh = .02\n\nx_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1\ny_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1\nxx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n\nz = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])\n\nz = z.reshape(xx.shape)\nplt.figure(1)\nplt.clf()\nplt.imshow(z, interpolation='nearest',\n extend=(xx.min(), xx.max(), yy.min(), yy.max()),\n cmap=plt.cm.Paired\n #aspect='auto' \n # origin='lower'\n )\n\nplt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)\n\ncentroids = kmeans.cluster_centers_\nplt.scatter(centroids[:, 0], centroids[:, 1],\n marker='x', s=169, linewidths=3,\n color='w', zorder=10)\nplt.title('K-Means clustering on the dataset (PCA-reduced data)\\n'\n 'Centroids are marked with white cross')\nplt.xlim(x_min, x_max)\nplt.ylim(y_min, y_max)\nplt.xticks(())\nplt.yticks(())\nplt.show()", "10 PySpark简介\n扩展我们的算法:有时我们需要处理大量数据,并且采样已经无效,这个时候可以通过把数据分到多个机器来处理。\nSpark是一个用来并行进行大数据处理的API。它将数据切割到集群来处理。在开发阶段,我们可以只在本地运行。\n我们使用PySpark Shell来连接到集群。\n运行下面路径的pyspark,会启动PySpark Shell\n~/spark/bin/pyspark (Max/Linux)\nC:\\spark\\bin\\pyspark (Windows)\n此时,可以在Shell中运行文件加载:\nlines = sc.textFile(\"README.md\")\nlines.first() # 加载第一行\n可以在http://localhost:4040查看PySpark运行的Job\n大多数情况下,我们希望能够在Jupyter Notebook中运行PySpark,为此,我们需要设置环境变量:\nPYSPARK_PYTHON=python3\nPYSPARK_DRIVER_PYTHON=\"jupyter\"\nPYSPARK_DRIVER_PYTHON_OPTS=\"notebook\"\n然后运行~/spark/bin/pyspark,最后一个命令会启动一个jupyter server,样子跟我们用的一样。", "lines = sc.text('README.md')\nlines.take(5)", "我们看看http://localhost:4040 可以查看运行的Job", "linesWithSpark = lines.filter(lambda line: 'spark' in line)\nlinesWithSpark.count()", "Spark的基本类型是RDD(resilient distributed dataset),它是基本分布式数据类型。RDD有两类操作,第一个是变换(transformation),返回值仍然是RDD,另外一种是动作(action),用来计算结果。Spark的操作是Lazy的,也就是说只有在执行action时才会真正的开始处理。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JasonNK/udacity-dlnd
autoencoder/Simple_Autoencoder_Solution.ipynb
mit
[ "A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)", "Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.", "img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.", "# Size of the encoding layer (the hidden layer)\nencoding_dim = 32\n\nimage_size = mnist.train.images.shape[1]\n\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\n\n# Output of hidden layer\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits\nlogits = tf.layers.dense(encoded, image_size, activation=None)\n# Sigmoid output from\ndecoded = tf.nn.sigmoid(logits, name='output')\n\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)", "Training", "# Create the session\nsess = tf.Session()", "Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. 
We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).", "epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
daniestevez/jupyter_notebooks
Falcon-9/Falcon-9 frames.ipynb
gpl-3.0
[ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport collections\n\nplt.rcParams['figure.figsize'] = (12,6)\nplt.rcParams['figure.facecolor'] = 'w'", "This notebook shows an analysis of the Falcon-9 upper stage S-band telemetry frames. It is based on r00t.cz's analysis.\nThe frames are CCSDS Reed-Solomon frames with an interleaving depth of 5, a (255,239) code, and an (uncoded) frame size of 1195 bytes.", "x = np.fromfile('falcon9_frames_20210324_084608.u8', dtype = 'uint8')\nx = x.reshape((-1, 1195))", "The first byte of all the frames is 0xe0. Here we see that one of the frames has an error in this byte.", "collections.Counter(x[:,0])", "The next three bytes form a header composed of a 13 bit frame counter and an 11 bit field that indicates where the first packet inside the payload starts (akin to a first header pointer in CCSDS protocols).", "header = np.unpackbits(x[:,1:4], axis = 1)\ncounter = header[:,:13]\ncounter = np.concatenate((np.zeros((x.shape[0], 3), dtype = 'uint8'), counter), axis = 1)\ncounter = np.packbits(counter, axis = 1)\ncounter = counter.ravel().view('uint16').byteswap()\nstart_offset = header[:,-11:]\nstart_offset = np.concatenate((np.zeros((x.shape[0], 5), dtype = 'uint8'), start_offset), axis = 1)\nstart_offset = np.packbits(start_offset, axis = 1)\nstart_offset = start_offset.ravel().view('uint16').byteswap()\n\nplt.plot(counter, '.')\nplt.title('Falcon-9 frame counter')\nplt.ylabel('13-bit frame counter')\nplt.xlabel('Decoded frame');", "Valid packets contain a 2 byte header where the 4 MSBs are set to 1 and the remaining 12 bits indicate the size of the packet payload in bytes (so the total packet size is this value plus 2). Using this header, the packets can be defragmented in the same way as CCSDS Space Packets transmitted using the M_PDU protocol.", "def packet_len(packet):\n packet = np.frombuffer(packet[:2], dtype = 'uint8')\n return (packet.view('uint16').byteswap()[0] & 0xfff) + 2\n\ndef valid_packet(packet):\n return packet[0] >> 4 == 0xf\n\ndef defrag(x, counter, start_offset):\n packet = bytearray()\n frame_count = None\n \n for frame, count, first in zip(x, counter, start_offset):\n frame = frame[4:]\n if frame_count is not None \\\n and count != ((frame_count + 1) % 2**13):\n # broken stream\n packet = bytearray()\n frame_count = count\n\n if first == 0x7fe:\n # only idle\n continue\n elif first == 0x7ff:\n # no packet starts\n if packet:\n packet.extend(frame)\n continue\n \n if packet:\n packet.extend(frame[:first])\n packet = bytes(packet)\n yield packet, frame_count\n\n while True:\n packet = bytearray(frame[first:][:2])\n if len(packet) < 2:\n # not full header inside frame\n break\n first += 2\n if not valid_packet(packet):\n # padding found\n packet = bytearray()\n break\n length = packet_len(packet) - 2\n packet.extend(frame[first:][:length])\n first += length\n if first > len(frame):\n # packet does not end in this frame\n break\n packet = bytes(packet)\n yield packet, frame_count\n packet = bytearray()\n if first == len(frame):\n # packet just ends in this frame\n break\n\n\npackets = list(defrag(x, counter, start_offset))", "Only ~76% of the frames payload contains packets. 
The rest is padding.", "sum([len(p[0]) for p in packets])/x[:,4:].size", "After the 2 byte header, the next 8 bytes of the packet can be used to identify its source or type.", "source_ids = [p[0][2:10].hex().upper() for p in packets]\ncollections.Counter(source_ids)", "Some packets have 64-bit timestamps starting 3 bytes after the packet source ID. These give nanoseconds since the GPS epoch.", "timestamps = np.datetime64('1980-01-06') + \\\n np.array([np.frombuffer(p[0][13:][:8], dtype = 'uint64').byteswap()[0] for p in packets]) \\\n * np.timedelta64(1, 'ns')\ntimestamps_valid = (timestamps >= np.datetime64('2021-01-01')) & (timestamps <= np.datetime64('2022-01-01'))\n\nplt.plot(timestamps[timestamps_valid],\n np.array([p[1] for p in packets])[timestamps_valid], '.')\nplt.title('Falcon-9 packet timestamps')\nplt.xlabel('Timestamp (GPS time)')\nplt.ylabel('Frame counter');", "Video packets\nVideo packets are stored in a particular source ID. If we remove the first 25 and last 2 bytes of these packets, we obtain 5 188-byte transport stream packets.", "video_source = '01123201042E1403'\nvideo_packets = [p for p,s in zip(packets, source_ids)\n if s == video_source]\nvideo_ts = bytes().join([p[0][25:-2] for p in video_packets])", "Only around 28% of the transmitted data is the transport stream video.", "len(video_ts)/sum([len(p[0]) for p in packets])\n\nwith open('/tmp/falcon9.ts', 'wb') as f:\n f.write(video_ts)\n\nts = np.frombuffer(video_ts, dtype = 'uint8').reshape((-1,188))\n\n# sync byte 71 = 0x47\nnp.unique(ts[:,0])\n\n# TEI = 0\nnp.unique(ts[:,1] >> 7)\n\npusi = (ts[:,1] >> 6) & 1\n# priority = 0\nnp.unique((ts[:,1] >> 5) & 1)\n\npid = ts[:,1:3].ravel().view('uint16').byteswap() & 0x1fff\nnp.unique(pid)\n\nfor p in np.unique(pid):\n print(f'PID {p} ratio {np.average(pid == p) * 100:.1f}%')\n\n# TSC = 0\nnp.unique(ts[:,3] >> 6)\n\nadaptation = (ts[:,3] >> 4) & 0x3\nnp.unique(adaptation)\n\ncontinuity = ts[:,3] & 0xf\n\nfor p in np.unique(pid):\n print('PID', p, 'PUSI values', np.unique(pusi[pid == p]),\n 'adaptation field values', np.unique(adaptation[pid == p]))\n\npcr_pid = ts[pid == 511]\npcr = np.concatenate((np.zeros((pcr_pid.shape[0], 2), dtype = 'uint8'), pcr_pid[:,6:12]), axis = 1)\npcr = pcr.view('uint64').byteswap().ravel()\npcr_base = pcr >> 15\npcr_extension = pcr & 0x1ff\npcr_value = (pcr_base * 300 + pcr_extension) / 27e6\n\nvideo_timestamps = timestamps[[s == video_source for s in source_ids]]\nts_timestamps = np.repeat(video_timestamps, 5)\npcr_pid_timestamps = ts_timestamps[pid == 511]\n\nplt.plot(pcr_pid_timestamps, pcr_value, '.')\nplt.title('Falcon-9 PCR timestamps')\nplt.ylabel('PID 511 PCR (s)')\nplt.xlabel('Packet timestamp (GPS time)');", "GPS log", "gps_source = '0117FE0800320303'\ngps_packets = [p for p,s in zip(packets, source_ids)\n if s == gps_source]\n\ngps_log = ''.join([str(g[0][25:-2], encoding = 'ascii') for g in gps_packets])\nwith open('/tmp/gps.txt', 'w') as f:\n f.write(gps_log)\nprint(gps_log)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JShadowMan/package
python/course/ch02-syntax-and-container/常用容器.ipynb
mit
[ "Python中的常用容器\nPython中的容器类型包括序列类型, 集合类型, 映射类型\n序列类型\nPython中有以下三种基本序列类型, 所谓序列(Sequence)即表示容器内的元素是有顺序的.\n * tuple(元组): 元组是不可变序列,通常用于储存异构数据的多项集, 当然也可以被用于储存同构数据.\n * list(列表): 列表是可变序列,通常用于存放同类项目的集合.\n * range: range对象表示不可变的数字序列,通常用于在 for 循环中循环指定的次数\ntuple元组\n元组是不可变序列,通常用于储存异构数据的多项集(例如映射类型dict的items()方法的返回值)", "# 空元祖可以直接使用 `()` 进行声明, 也可以使用tuple()进行声明\n(), type(()), tuple(), type(tuple())\n\n# 单个元素的元组使用`()`的方式声明必须在元素之后增加`,`\nprint( (1), (1,) )\n\nt1 = (1, 2.33, (\"tuple\",), [\"hello\"])", "元组中为不可变序列, 具体体现为元组元素__指向__的内存地址不能变, 但是如果元素为可变类型, 那元素的内容可变.", "t1[0] = 2\n\nt1[1] = 6.66\n\nt1[2] = (\"variadic\",)\n\nt1[3].append(\"world\")\n\nprint(t1)", "list列表\n列表是可变序列,通常用于存放同类项目的集合, 通常用于储存相同类型的数据.当然也可以储存不同类型的数据.", "# 与tuple相同, 可以直接使用 [] 或者 list() 来声明一个空数组\nprint( [], type([]), list(), type(list()) )\n\n# 与tuple不同的是, 在声明单个元素的时候不需要在元素之后多写一个 ,\n\n[1], type([1])\n\nlist1 = [1, 2.33, (\"tuple\",), [\"hello\"]]", "另一个与tuple不同的是, list中元素指向的内存地址是可变的. 这就意味着我们能修改元素的指向", "list1[0] = 2\n\n# 这里同时修改了类型\nlist1[1] = \"6.66\"\n\n# 思考是否可以这样修改, 为什么?\nlist1[2][0] = \"variadic\"\n\n# 思考是否可以这样修改, 为什么?\nlist1[2] = \"variadic\"\n\nlist1[3].append(\"world\")\n\nprint(list1)", "比较常用的是, 我们可以使用sort方法来排序列表中的字段", "list2 = [1, 2, 0, -1, 9, 7, 6, 5]\nlist2.sort()\nlist2", "range对象\n对于range对象, 我们之前有简单的介绍过, 这里我们复习一下.\nrange类型表示不可变的数字序列,一般用于在 for 循环中循环指定的次数。\nrange 类型相比常规 list 或 tuple 的优势在于一个 range 对象总是占用固定数量的(较小)内存.\n不论其所表示的范围有多大(因为它只保存了 start, stop 和 step 值,并会根据需要计算具体单项或子范围的值)", "# 只传递一个参数的话, 表示从`0`开始到`X`的序列\nlist(range(10))\n\n# 如果传递2个参数, 表示从`i`到`j`之间的序列\nlist(range(2, 8))\n\n# 如果传递了3个参数, 表示从`i`到`j`步长为`k`的序列\nlist(range(0, 10, 2))", "扩展阅读, 其实range的简单实现其实看做是一个生成器, 当然实际的range不是一个生成器这么简单", "def my_range(stop: int):\n start = 0\n while start != stop:\n yield start\n start += 1\nlist(my_range(10))", "容器中的基本操作\n判断元素是否存在于一个容器中\n使用in或者not in来判断一个元素是否存在于一个容器中", "1 in [1, 2, 3], 2 not in [1, 2, 3]", "容器拼接\n在Python中我们可以非常简单的使用+进行容器的拼接.", "[1, 2, 3] + [4, 5, 6]\n\n(1, 2, 3) + (4, 5, 6)\n\n[1, 2, 3] + list(range(4, 10))", "重复容器内元素\n在Python中可以将一个容器 * 一个整数, 可以得到这个容器重复 X 次的结果.", "[[]] * 3, [[1, [2, ]]] * 3\n\nlist(range(5)) * 3", "其余操作\n\n使用len()获取容器的长度\n使用min()获取容器中的最小项\n使用max()获取容器中的最大项\n使用s.count(x)获取容器s中x出现的次数", "len([]), len([()]), len(([], [], [])), len(range(10))\n\nmin([1, 2, 3, 4, -1]), max([1, 2, 3, 4, -1])\n\nimport random\n\nlist3 = []\nfor _ in range(1000):\n list3.append(random.randint(0, 10))\nprint(list3.count(6))", "容器切片(重点)\n在Python中, 切片是非常好用且非常常用的一个特性.\n\n使用[i]获取第i项数据(从0开始)\n使用[i:j]获取从i到j的切片(左闭右开)\n使用[i:j:k]获取i到j步长为k的切片", "list4 = list(range(10)); print(list4)\n\n# python中支持使用负数作为索引, 表示从尾部开始取, 注意: 负数的起始值为 -1\nlist4[-1], list4[-2], list4[-3]\n\n# 获取一个切片, 从第2个元素到底6个元素\nlist4[2:6]\n\n# 如果不写 i 表示从头部开始取\nlist4[:5]\n\n# 如果不写 j 表示取到尾部为止\nlist4[5:]\n\n# 思考: 如果 i 和 j 都不写, 那会打印什么?\nlist4[:]\n\n# 第三个参数为步长参数, 表示每次隔几个元素取一次值\nlist4[1:8:2]\n\n# 思考1: 如何反转一个容器?\n\n# 思考2: 如下打印什么\nlist4[::]", "集合类型\nPython中的集合类型有set和frozenset(较少使用, 表示创建之后不可修改, 可以简单看做是tuple版本的set).\nset对象和frozenset对象是由具有唯一性的 hashable 对象所组成的__无序__多项集.\n常见的用途包括成员检测、从序列中去除重复项以及数学中的集合类计算,例如交集、并集、差集与对称差集等等。", "set1 = set([1, 2, 3, 1, 4, 5, 5, 2])\n\nset1.update([6, 7, 2, 3]); print(set1)\n\nfset1 = frozenset([1, 2, 3, 1, 4, 5, 5, 2])\n\n# 可以使用 dir 来打印一个对象所包含的所有内容\nprint(dir(fset1))", "映射类型\n在Python中仅有一种映射类型, 即dict. 即javascript中的对象, php中的命名数组. 
映射属于无序可变对象.\n字典的键 几乎 可以是任何值。 非 hashable 的值,即包含列表、字典或其他可变类型的值(此类对象基于值而非对象标识进行比较)不可用作键。 数字类型用作键时遵循数字比较的一般规则:如果两个数值相等 (例如 1 和 1.0) 则两者可以被用来索引同一字典条目。 (但是请注意,由于计算机对于浮点数存储的只是近似值,因此将其用作字典键是不明智的。)", "dict1 = {\n \"key1\": \"value1\",\n 123: 456,\n 123.0: 789,\n (\"k\", \"e\", \"y\"): (\"k\", \"e\", \"y\")\n}\nprint(dict1)\n\n# 如果获取字典中不存在的键, 则会发生KeyError错误\ndict1[\"key2\"]\n\n# 为了避免这个问题我们可以使用get方法获取字典中的值, 并自定义获取不到的时候返回的默认值\ndict1.get(\"key2\", \"default value\")\n\n# 这里我们可以更近一步, 使用setdefault方法来当获取不到指定键的值的时候自动新增一个默认值, 并且返回默认值\nprint(dict1.setdefault(\"key2\", \"default value and set\"))\nprint(dict1)\n\n# 但是可以给字典中不存在的键赋值\ndict1[\"key3\"] = \"value3\"\nprint(dict1)\n\n# 在字典上, 我们也可以使用`in`和`not in`判断一个键是否存在于一个字典中\n\"key1\" in dict1, \"key2\" not in dict1", "常用的原生扩展容器\nPython中的常用原生扩展容易位于包collections中\n\nnamedtuple命名元组: 简易的纯属性类声明方式, 实际上是一个元组\ndeque双向链表: 用于解决需要频繁插入删除的业务下原生list效率太差的问题\nOrderedDict有序字典: 用于解决原生dict无序的问题\n\nnamedtuple命名元组", "from collections import namedtuple\n\nPointer = namedtuple('Pointer', 'x y')\nCoordinate = namedtuple('Coordinate', 'x, y, z')\n\nstart, end = Pointer(0, 0), Pointer(9, 9)\ncoord1, coord2 =Coordinate(0, 0, 0), Coordinate(9, 9, 9)\n\nprint(start, end, coord1, coord2)\nprint(start.x, start.y)\nprint(coord2.x, coord2.y, coord2.z)", "deque双向链表", "from collections import deque\n\nprint(dir(deque()))", "OrderedDict有序字典", "from collections import OrderedDict\n\nod = OrderedDict()\nod[\"k1\"] = 123\nod[\"k2\"] = 123\nod[\"k3\"] = 123\n\nfor k, v in od.items():\n print(k, v)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ITAM-DS/analisis-numerico-computo-cientifico
libro_optimizacion/temas/4.optimizacion_en_redes_y_prog_lineal/4.1/Programacion_lineal_y_metodo_simplex.ipynb
apache-2.0
[ "(PROGLINEAL)=\n4.1 Programación lineal (PL) y método símplex\n```{admonition} Notas para contenedor de docker:\nComando de docker para ejecución de la nota de forma local:\nnota: cambiar &lt;ruta a mi directorio&gt; por la ruta de directorio que se desea mapear a /datos dentro del contenedor de docker y &lt;versión imagen de docker&gt; por la versión más actualizada que se presenta en la documentación.\ndocker run --rm -v &lt;ruta a mi directorio&gt;:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt;\npassword para jupyterlab: qwerty\nDetener el contenedor de docker:\ndocker stop jupyterlab_optimizacion\nDocumentación de la imagen de docker palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt; en liga.\n```\n\n```{admonition} Al final de esta nota la comunidad lectora:\n:class: tip\n\n\nConocerá el modelo de programación lineal, su interpretación y diferentes formas del mismo.\n\n\nComprenderá el método gráfico y aspectos esenciales del método de símplex para resolver programas lineales.\n\n\nAprenderá las definiciones de programación entera, mixta y binaria.\n\n\nTendrá una lista de métodos heurísticos y meta heurísticas que ayudan a resolver problemas de optimización, en particular de optimización combinatoria.\n\n\n```\n```{sidebar} Un poco de historia ...\nEl desarrollo de la programación lineal (PL) ha sido clasificado como uno de los avances científicos más importantes de mediados del siglo XX. Es quizás el modelo prototipo de la optimización con restricciones. El efecto que ha tenido en la práctica y en áreas del conocimiento desde 1950 es en verdad grande. El tipo más común de aplicación abarca el problema general de asignar de la mejor manera posible, esto es, de forma óptima, recursos limitados a actividades que compiten entre sí por ellos. Con más precisión, se desea elegir el nivel de ciertas actividades que compiten por recursos escasos necesarios para realizarlas y se puedan asignar recursos a tales actividades. El desarrollo por Dantzig del método símplex para resolver programas lineales en los $40$'s marcó el inicio de la era moderna en optimización. \nLa PL utiliza un modelo matemático para describir el problema. El adjetivo lineal significa que todas las funciones del modelo deben ser funciones lineales. En este caso, la palabra programación no se refiere a términos computacionales; en esencia es sinónimo de planeación. Por lo tanto, la PL involucra la planeación de actividades para obtener un resultado óptimo; esto es, el resultado que mejor alcance la meta establecida, de acuerdo con el modelo matemático, entre todas las alternativas factibles.\nAunque la asignación de recursos a las actividades es la aplicación más frecuente en PL, cualquier problema cuyo modelo se ajuste al formato general del modelo de PL, es un problema de PL.\n```\n(FORMAESTPL)=\nForma estándar de un PL\nUn programa lineal (PL) en su forma estándar es un problema de optimización con una función lineal objetivo, un conjunto de restricciones lineales de igualdad y un conjunto de restricciones no negativas impuestas a las variables. 
Es un modelo de optimización de la forma:\n$$\\displaystyle \\min_{x \\in \\mathbb{R}^n} c^Tx$$\n$$\\text{sujeto a:}$$\n$$Ax=b$$\n$$x \\geq 0$$\ndonde: $c \\in \\mathbb{R}^n$ es un vector de costos, $A \\in \\mathbb{R}^{m \\times n}$, se asume $m < n$ y tiene rank completo por renglones y la última desigualdad se refiere a que todas las componentes del vector $x \\in \\mathbb{R}^n$ son mayores o iguales a cero (son mayores o iguales a cero de una forma pointwise). La función objetivo es $f_o(x) = c^Tx$ y se busca minimizar el costo. El modelo anterior realiza suposiciones como son: proporcionalidad y aditividad para la función objetivo y restricciones. Tales supuestos se deben mantener respecto a las variables en $x$. La proporcionalidad se resume en que los exponentes de cada componente de $x$ deben ser igual a uno y la aditividad en cuanto a que las contribuciones individuales de las componentes de $x$ es su suma en la función objetivo y restricciones.\n```{admonition} Comentario\nEl PL es un problema convexo pues una función lineal es convexa y cóncava al mismo tiempo. Obsérvese que la forma estándar de un problema convexo pide que el problema se escriba con desigualdades del tipo $\\leq$. Tal forma se puede obtener si se definen $h: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m, f:\\mathbb{R}^n \\rightarrow \\mathbb{R}^n$, $h(x) = Ax-b$, $f(x) = -x$ con $x \\in \\mathbb{R}^n$, ver {ref}problemas de optimización convexa en su forma estándar o canónica &lt;PROBOPTCONVEST&gt;.\n```\n(EJFLUJOENREDESYPL)=\nEjemplo de flujo en redes\nConsidérese el problema de satisfacer el flujo neto de todos los nodos con etiquetas \"A, B, C, D\" y \"E\" de la siguiente red de acuerdo a las capacidades de cada uno de ellos al menor costo posible:", "import pprint\nfrom scipy.optimize import linprog\nimport numpy as np\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\nnodes_pos = [[0.18181818181818182, 0.7272727272727273],\n [0.18181818181818182, 0.2727272727272727],\n [0.5454545454545454, 0.2727272727272727],\n [0.5454545454545454, 0.7272727272727273],\n [0.36363636363636365, 0.5454545454545454]]\n\nnodes = ['A', 'B', 'E', 'D', 'C']\n\nnodes_and_pos = dict(zip(nodes, nodes_pos))\n\nG_min_cost_flow = nx.DiGraph()\n\nG_min_cost_flow.add_node('A', netflow = 50, node_and_netflow=\"A [50]\")\nG_min_cost_flow.add_node('B', netflow = 40, node_and_netflow=\"B [40]\")\nG_min_cost_flow.add_node('C', netflow = 0, node_and_netflow=\"C [0]\")\nG_min_cost_flow.add_node('D', netflow = -30, node_and_netflow=\"D [-30]\")\nG_min_cost_flow.add_node('E', netflow = -60, node_and_netflow=\"E [-60]\")\n\nedge_labels_min_cost_flow = {('A', 'B'): {\"weight\": 2},\n ('A', 'C'): {\"weight\": 4},\n ('A', 'D'): {\"weight\": 9},\n ('B', 'C'): {\"weight\": 3},\n ('C', 'E'): {\"weight\": 1},\n ('E', 'D'): {\"weight\": 2},\n ('D', 'E'): {\"weight\": 3}\n }\n\n\nG_min_cost_flow.add_edges_from(edge_labels_min_cost_flow)\nfor e in G_min_cost_flow.edges():\n G_min_cost_flow[e[0]][e[1]][\"weight\"] = edge_labels_min_cost_flow[e][\"weight\"]\n \nplt.figure(figsize=(9, 9))\nnx.draw_networkx_edges(G_min_cost_flow, pos=nodes_and_pos, \n alpha=0.3,\n min_target_margin=25, connectionstyle=\"arc3, rad = 0.1\")\nnx.draw_networkx_edge_labels(G_min_cost_flow, pos=nodes_and_pos, \n edge_labels=edge_labels_min_cost_flow, label_pos=0.4,\n font_size=10)\nnodes_pos_modified = {}\n\ny_off = 0.03\n\nnodes_and_pos_modified = nodes_and_pos.copy()\n\nfor node in G_min_cost_flow.nodes():\n if node == 'B' or node == 'E':\n nodes_and_pos_modified[node] = 
[nodes_and_pos_modified[node][0], \n nodes_and_pos_modified[node][1] - y_off]\n else:\n nodes_and_pos_modified[node] = [nodes_and_pos_modified[node][0], \n nodes_and_pos_modified[node][1] + y_off]\n \n \nlabels = nx.get_node_attributes(G_min_cost_flow, \"node_and_netflow\")\n\nnx.draw_networkx_labels(G_min_cost_flow, pos=nodes_and_pos_modified, \n labels=labels)\nnx.draw_networkx_nodes(G_min_cost_flow, pos=nodes_and_pos, \n node_size=1000, alpha=0.6)\nplt.axis(\"off\")\nplt.show() ", "En la red anterior el arco $(D, E)$ tiene costo igual a $3$ y el arco $(E, D)$ tiene costo igual a $2$.\n```{margin}\nObsérvese que es ligeramente distinta la nomenclatura de este problema en cuanto a los términos de flujo neto y demanda que tiene un nodo de acuerdo a lo que se describe en el {ref}ejemplo de flujo de costo mínimo &lt;EJREDFLUJOCOSTOMIN&gt;\n```\nAl lado de cada nodo en corchetes se presenta el flujo neto generado por el nodo. Los nodos origen tienen un flujo neto positivo y en la red son los nodos \"A\" y \"B\" (por ejemplo fábricas). Los nodos destino tienen un flujo neto negativo que en la red son los nodos \"D\" y \"E\" (por ejemplo clientes). El único nodo de transbordo es el nodo \"C\" que tiene flujo neto igual a cero (centro de distribución por ejemplo). Los valores de los costos se muestran en los arcos. Es una red sin capacidades.\nEntonces el modelo de PL que minimiza el costo de transferencia de flujo de modo que el flujo neto satisfaga lo representado en la red, considerando el flujo neto como el flujo total que sale del nodo menos el flujo total que entra al nodo es:\n$$\\displaystyle \\min_{x \\in \\mathbb{R}^7} 2 x_{AB} + 4 x_{AC} + 9 x_{AD} + 3 x_{BC} + x_{CE} + 3 x_{DE} + 2x_{ED}$$\n$$\\text{sujeto a: }$$\n$$\n\\begin{eqnarray}\n&x_{AB}& + &x_{AC}& + &x_{AD}& && && && && &=& 50 \\nonumber \\\n&-x_{AB}& && && + &x_{BC}& && && && &=& 40 \\nonumber \\\n&& - &x_{AC}& && - &x_{BC}& + &x_{CE}& && && &=& 0 \\nonumber \\\n&& && - &x_{AD}& && && + &x_{DE}& - &x_{ED}& &=& -30 \\nonumber \\\n&& && && && - &x_{CE}& - &x_{DE}& + &x_{ED}& &=& -60 \\nonumber\n\\end{eqnarray}\n$$\n$$x_{ij} \\geq 0 \\forall i,j$$\nLa primer restricción de igualdad representa el flujo neto para el nodo $A$ y la última el flujo neto para el nodo $E$. A tales ecuaciones de las restricciones de igualdad se les conoce con el nombre de ecuaciones de conservación de flujo.\n```{admonition} Observación\n:class: tip\nObsérvese que la matriz que representa a las restricciones de igualdad es la matriz de incidencia nodo-arco. Ver {ref}Representación de redes: matriz de incidencia nodo-arco &lt;MATINCIDNODOARCO&gt;\n```\n```{margin}\nMultiplicamos por $-1$ pues el resultado de la función incidence_matrix está volteado respecto a la definición de la matriz de incidencia nodo-arco. 
\n```", "print(-1*nx.incidence_matrix(G_min_cost_flow, oriented=True).todense())", "El problema anterior lo podemos resolver directamente con scipy-optimize-linprog que es una función que resuelve PL's:", "c = np.array([2, 4, 9, 3, 1, 3, 2])\n\nA_eq = -1*nx.incidence_matrix(G_min_cost_flow, oriented=True).todense()\n\nprint(A_eq)\n\nb = list(nx.get_node_attributes(G_min_cost_flow, \n \"netflow\").values())", "```{margin}\nCada tupla hace referencia a las cotas inferiores y superiores que tiene cada variable.\n```", "bounds = [(0, None), (0,None), (0,None), (0,None), (0,None), (0, None), (0, None)]\n\nprint(linprog(c=c, A_eq=A_eq, b_eq=b,bounds=bounds))", "```{margin}\nLos solvers son paquetes de software para resolver modelos de programación lineal y modelos relacionados que se encuentran en los lenguajes de modelado.\n```\n```{margin}\nSe instala cvxopt, un paquete de Python para resolver problemas de optimización convexa que ya trae el solver GLPK, ver cvxpy: install-with-cvxopt-and-glpk-support.\n```\nTambién con cvxpy podemos resolver el PL anterior. Para mostrar la flexibilidad que tienen los lenguajes de modelado como cvxpy se define $x$ como variable entera. cvxpy puede resolver este tipo de problemas si se instala el solver GLPK :", "!pip install --quiet cvxopt", "```{margin}\nVer cvxpy: linear_program\n```", "import cvxpy as cp\n\nn = 7 #number of variables\nx = cp.Variable(n, integer=True) #x as integer optimization variable\nfo_cvxpy = c.T@x #objective function\n\nconstraints = [A_eq@x == b,\n x >=0\n ]\n\nopt_objective = cp.Minimize(fo_cvxpy)\n\nprob = cp.Problem(opt_objective, constraints)\nprint(prob.solve())\n\n# Print result.\nprint(\"\\nThe optimal value is\", prob.value)\nprint(\"A solution x is\")\nprint(x.value)", "(EJPROTOTIPO)=\nEjemplo prototipo\nSupóngase que una compañía tiene tres plantas en las que se producen dos productos. La compañía nos entrega los siguientes datos relacionados con:\n\n\nNúmero de horas de producción disponibles por semana en cada planta para fabricar estos productos.\n\n\nNúmero de horas de fabricación para producir cada lote de los productos.\n\n\nLa ganancia por lote de cada producto.\n\n\nLo anterior se resume en la siguiente tabla:\n| |Tiempo de producción por lote en horas |||\n|:---:|:---:|:---:|:---:|\n| Planta |Producto 1|Producto 2| Tiempo de producción disponible a la semana en horas|\n|1|1|0|4|\n|2|0|2|12|\n|3|3|2|18|\n|Ganancia por lote| 3000| 5000||\nLa tabla anterior indica en su primer renglón que cada lote del producto 1 que se produce por semana emplea una hora de producción en la planta 1 y sólo se dispone de 4 horas semanales (recursos disponibles). Como se lee en la tabla, cada producto se fabrica en lotes de modo que la tasa de producción está definida como el número de lotes que se producen a la semana. \nObsérvese que el producto 1 requiere parte de la capacidad de producción en las plantas 1, 3 y nada en la planta 2. El producto 2 necesita trabajo en las plantas 2 y 3. Por lo anterior no está claro cuál mezcla de productos sería la más rentable.\nSe permite cualquier combinación de tasas de producción que satisfaga estas restricciones, incluso no fabricar uno de los productos y elaborar todo lo que sea posible del otro. 
La tasa de producción está definida como el número de lotes que se producen a la semana.\nSe desea determinar cuáles tasas de producción (no negativas) deben tener los dos productos con el fin de maximizar las ganancias totales sujetas a las restricciones impuestas por las capacidades de producción limitadas disponibles en las tres plantas.\nSe asume que las plantas únicamente destinan su producción a estos dos productos y la ganancia incremental de cada lote adicional producido es constante sin importar el número total de lotes producidos. La ganancia total de cada producto es aproximadamente la ganancia por lote que se produce multiplicada por el número de lotes.\nSe modela el problema anterior como un PL con las siguientes variables:\n$x_1$: número de lotes del producto 1 que se fabrican por semana.\n$x_2$: número de lotes del producto 2 que se fabrican por semana.\n$f_o(x_1, x_2)$: ganancia semanal total (en miles de pesos) que generan estos dos productos.\nSe debe resolver el PL siguiente:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^2} 3x_1 + 5x_2$$\n$$\\text{sujeto a: }$$\n$$x_1 \\leq 4$$\n$$2x_2 \\leq 12$$\n$$3x_1 + 2x_2 \\leq 18$$\n$$x_1 \\geq 0, x_2 \\geq 0$$\nEl término $3x_1$ representa la ganancia generada (en miles de pesos) cuando se fabrica el producto 1 a una tasa de $x_1$ lotes por semana. Se tienen contribuciones individuales de cada producto a la ganancia.\nModelo de PL\nAlgunas características generales de los problemas de PL se presentan a continuación\nTerminología en PL\n|Ejemplo prototipo | Problema general|\n|:---:|:---:|\n|Capacidad producción de las plantas | Recursos|\n|3 plantas | m recursos |\n|Fabricación de productos | Actividades |\n|2 productos | n actividades|\n|Tasa de producción del producto | Nivel de la actividad|\n|Ganancia | Medida global de desempeño|\nY en la terminología del problema general se desea determinar la asignación de recursos a ciertas actividades. Lo anterior implica elegir los niveles de las actividades (puntos óptimos) que lograrán el mejor valor posible (valor óptimo) de la medida global de desempeño.\nEn el PL:\n$f_o$: valor de la medida global de desempeño (función objetivo).\n$x_j$: nivel de la actividad $j$ con $j=1, 2, \\dots, n$. 
También se les conoce con el nombre de variables de decisión (variables de optimización).\n$c_j$: incremento en $f_o$ que se obtiene al aumentar una unidad en el nivel de la actividad j.\n$b_i$: cantidad de recurso $i$ disponible para asignarse a las actividades con $i=1, 2, \\dots, m$.\n$a_{ij}$: cantidad del recurso $i$ consumido por cada unidad de la actividad $j$.\n```{admonition} Observación\n:class: tip\nLos valores de $c_j, b_i, a_{ij}$ son las constantes o parámetros del modelo.\n```\nFormas de un PL\nEs posible que se encuentren con PL en diferentes formas por ejemplo:\n1.Minimizar en lugar de maximizar la función objetivo.\n2.Restricciones con desigualdad en sentido mayor, menor o igual que.\n3.Restricciones en forma de igualdad.\n4.Variables de decisión sin la restricción de no negatividad (variables libres).\nPero siempre que se cumpla con que la función objetivo y las restricciones son funciones lineales entonces tal problema se clasifica como un PL.\n```{admonition} Observación\n:class: tip\nSi se utiliza un PL con otras formas diferentes a la del ejemplo prototipo (por ejemplo variables libres en lugar de no negativas) es posible que la interpretación de \"asignación de recursos limitados entre actividades que compiten\" puede ya no aplicarse muy bien; pero sin importar cuál sea la interpretación o el contexto, lo único necesario es que la formulación matemática del problema se ajuste a las formas permitidas.\n```\n(EJMETGRAFICOPL)=\nEjemplo: método gráfico\nA continuación se muestra un procedimiento gráfico para resolver el PL del ejemplo prototipo. Esto es posible realizar pues tenemos sólo dos variables. Se tomará $x_1$ como el eje horizontal y $x_2$ el eje vertical. \nRecordando las variables del ejemplo prototipo:\n$x_1$: número de lotes del producto 1 que se fabrican por semana.\n$x_2$: número de lotes del producto 2 que se fabrican por semana.\n$f_o(x_1, x_2)$: ganancia semanal total (en miles de pesos) que generan estos dos productos.\nY el PL es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^2} 3x_1 + 5x_2$$\n$$\\text{sujeto a: }$$\n$$x_1 \\leq 4$$\n$$2x_2 \\leq 12$$\n$$3x_1 + 2x_2 \\leq 18$$\n$$x_1 \\geq 0, x_2 \\geq 0$$\nEntonces se tiene la siguiente región definida por las desigualdades del PL:", "#x_1 ≤ 4\n\npoint1_x_1 = (4,0)\n\npoint2_x_1 = (4, 10)\n\npoint1_point2_x_1 = np.row_stack((point1_x_1, point2_x_1))\n\n#x_1 ≥ 0\npoint3_x_1 = (0,0)\n\npoint4_x_1 = (0, 10)\n\npoint3_point4_x_1 = np.row_stack((point3_x_1, point4_x_1))\n\n#2x_2 ≤ 12 or x_2 ≤ 6\n\npoint1_x_2 = (0, 6)\n\npoint2_x_2 = (8, 6)\n\npoint1_point2_x_2 = np.row_stack((point1_x_2, point2_x_2))\n\n#x_2 ≥ 0\n\npoint3_x_2 = (0, 0)\n\npoint4_x_2 = (8, 0)\n\npoint3_point4_x_2 = np.row_stack((point3_x_2, point4_x_2))\n\n#3x_1 + 2x_2 ≤ 18\n\nx_1_region_1 = np.linspace(0,4, 100)\n\nx_2_region_1 = 1/2*(18 - 3*x_1_region_1)\n\n\nx_1 = np.linspace(0,6, 100)\n\nx_2 = 1/2*(18 - 3*x_1)\n\nplt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1],\n point3_point4_x_1[:,0], point3_point4_x_1[:,1],\n point1_point2_x_2[:,0], point1_point2_x_2[:,1],\n point3_point4_x_2[:,0], point3_point4_x_2[:,1],\n x_1, x_2)\n\nplt.legend([\"$x_1 = 4$\", \"$x_1 = 0$\", \n \"$2x_2 = 12$\", \"$x_2 = 0$\",\n \"$3x_1+2x_2 = 18$\"], bbox_to_anchor=(1, 1))\n\nplt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\")\nx_1_region_2 = np.linspace(0,2, 100)\nplt.fill_between(x_1_region_2, 0, 6, color=\"plum\")\nplt.title(\"Región factible del PL\")\nplt.show()\n", "La región sombreada es la región factible. 
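A modo de bosquejo (la función `es_factible` es un nombre ilustrativo, hipotético), podemos comprobar numéricamente si un punto $(x_1, x_2)$ satisface las restricciones del ejemplo prototipo:

```python
# Bosquejo: evaluación directa de las restricciones del ejemplo prototipo.
def es_factible(x_1, x_2):
    # restricciones funcionales y de no negatividad
    return (x_1 <= 4) and (2*x_2 <= 12) and (3*x_1 + 2*x_2 <= 18) \
           and (x_1 >= 0) and (x_2 >= 0)

print(es_factible(2, 3))   # True: punto dentro de la región sombreada
print(es_factible(4, 4))   # False: viola 3x_1 + 2x_2 <= 18
```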
Cualquier punto que se elija en la región factible satisface las desigualdades definidas en el PL. Ahora tenemos que seleccionar dentro de la región factible el punto que maximiza el valor de la función objetivo $f_o$.\nEl procedimiento gráfico consiste en dar a $f_o$ algún valor arbitrario, dibujar la recta definida por tal valor y \"mover tal recta de forma paralela\" en la dirección que $f_o$ crece (si se desea maximizar y en la dirección en la que $f_o$ decrece si se desea minimizar) hasta que se mantenga en la región factible.\nPara la función objetivo del PL anterior queda como sigue:\n$$y = f_o(x) = 3x_1 + 5x_2$$\ny vamos dando valores arbitrarios a $y$:\n```{margin}\nTodas las rectas tienen la misma pendiente por lo que son paralelas. Cada una de las rectas son las curvas de nivel de $f_o$\n```", "plt.figure(figsize=(7,7))\nplt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], \"--\", color=\"black\", label = \"_nolegend_\")\nplt.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], \"--\", color=\"black\", label = \"_nolegend_\")\nplt.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], \"--\", color=\"black\", label = \"_nolegend_\")\nplt.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], \"--\", color=\"black\", label = \"_nolegend_\")\nplt.plot(x_1, x_2, \"--\", color=\"black\", label=\"_nolegend_\")\n\nplt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\")\nplt.fill_between(x_1_region_2, 0, 6, color=\"plum\")\nplt.title(\"Región factible del PL\")\n\nx_1_line_1 = np.linspace(0, 4, 100)\n\nx_2_line_1 = 1/5*(-3*x_1_line_1 + 10)\n\nx_1_line_2 = np.linspace(0, 7, 100)\n\nx_2_line_2 = 1/5*(-3*x_1_line_2 + 20)\n\nx_1_line_3 = np.linspace(0, 8, 100)\n\nx_2_line_3 = 1/5*(-3*x_1_line_3 + 36)\n\nplt.plot(x_1_line_1, x_2_line_1, \"green\",\n x_1_line_2, x_2_line_2, \"indigo\",\n x_1_line_3, x_2_line_3, \"darkturquoise\"\n )\n\n\noptimal_point = (2, 6)\n\nplt.scatter(optimal_point[0], optimal_point[1], marker='o', s=150,\n facecolors='none', edgecolors='b')\n\npoint_origin = (0, 0)\n\npoint_gradient_fo = (3, 5)\n\n\npoints_for_gradient_fo = np.row_stack((point_origin,\n point_gradient_fo))\n\n\nplt.arrow(point_origin[0], point_origin[1],\n point_gradient_fo[0], point_gradient_fo[1],\n width=.05, color=\"olive\")\n\nplt.legend([\"$10 = 3x_1 + 5x_2$\",\n \"$20 = 3x_1 + 5x_2$\",\n \"$36 = 3x_1 + 5x_2$\",\n \"$\\\\nabla f_o(x)$\"], bbox_to_anchor=(1.4, 1))\n\nplt.show()\n", "Si realizamos este proceso para valores de $y$ iguales a $36, 20, 10$ observamos que la recta que da el mayor valor de la $f_o$ y que se mantiene en la región factible es aquella con valor $y_1= f_o(x) = 36$. Corresponde a la pareja $(x_1, x_2) = (2, 6)$ y es la solución óptima. Entonces produciendo los productos $1$ y $2$ a una tasa de $2$ y $6$ lotes por semana se maximiza la ganancia siendo de 36 mil pesos. No existen otras tasas de producción que sean tan redituables como la anterior de acuerdo con el modelo.\n```{admonition} Comentarios\n\n\nEl método gráfico anterior sólo funciona para dos o tres dimensiones.\n\n\nEl gradiente de la función objetivo nos indica la dirección de máximo crecimiento de $f_o$. En el ejemplo prototipo $\\nabla f_o(x) = \\left [ \\begin{array}{c} 3 \\ 5 \\end{array} \\right ]$ y tal vector apunta hacia la derecha y hacia arriba. 
Entonces en esa dirección es hacia donde desplazamos las rectas paralelas.\n\n\nLa región factible que resultó en el ejemplo prototipo se le conoce con el nombre de poliedro y es un conjunto convexo (en dos dimensiones se le nombra polígono). Es una intersección finita entre hiperplanos y semi espacios, también puede pensarse como el conjunto solución de un número finito de ecuaciones y desigualdades lineales.\n\n\n```\n```{admonition} Ejercicio\n:class: tip\nResuelve con el método gráfico el siguiente PL:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^2} 2x_1 + x_2$$\n$$\\text{sujeto a: }$$\n$$x_2 \\leq 10$$\n$$2x_1 + 5x_2 \\leq 60$$\n$$x_1 + x_2 \\leq 18$$\n$$3x_1 + x_2 \\leq 44$$\n$$x_1 \\geq 0, x_2 \\geq 0$$\nMarca al gradiente de la función objetivo en la gráfica.\n```\nTipo de soluciones en un PL\nLos puntos factibles que resultan de la intersección entre las rectas del ejemplo prototipo que corresponden a las desigualdades se les nombra soluciones factibles en un vértice (FEV) (se encuentran en una esquina). Las soluciones FEV no son una combinación convexa estricta entre puntos distintos del poliedro formado en la región factible (no caen en algún segmento de línea formado por dos puntos distintos).\n```{admonition} Observación\n:class: tip\nTambién a las soluciones FEV se les conoce como puntos extremos pero resulta más sencillo recordar FEV.\n```\n```{admonition} Comentario\nEl método gráfico en la región anterior ilustra una propiedad importante de los PL con soluciones factibles y una región acotada: siempre tiene soluciones FEV y al menos una solución óptima, aún más, la mejor solución en un FEV debe ser una solución óptima.\n```\n¿A qué le llamamos solución en un PL?\nCualquier conjunto de valores de las variables de decisión ($x_1, x_2, \\dots, x_n$) se le nombra una solución y se identifican dos tipos:\n\n\nUna solución factible es aquella para la cual todas las restricciones se satisfacen.\n\n\nUna solución no factible es aquella para la cual al menos una restricción no se satisface.\n\n\nEn el ejemplo prototipo los puntos $(2,3)$ y $(4,1)$ son soluciones factibles y $(-1, 3), (4,4)$ son soluciones no factibles.", "plt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], \"black\", label = \"_nolegend_\")\nplt.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], \"black\", label = \"_nolegend_\")\nplt.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], \"black\", label = \"_nolegend_\")\nplt.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], \"black\", label = \"_nolegend_\")\nplt.plot(x_1, x_2, \"black\", label = \"_nolegend_\")\n\n\nplt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nplt.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\nplt.scatter(2, 3, marker='o', s=150)\nplt.scatter(4, 1, marker='*', s=150)\nplt.scatter(-1, 3, marker='v', s=150)\nplt.scatter(4, 4, marker='^', s=150)\n\nplt.legend([\"Solución factible\", \"Solución factible\",\n \"Solución no factible\", \"Solución no factible\"])\n\nplt.title(\"Tipos de soluciones en un PL\")\nplt.show()", "```{margin}\n\"Valor más favorable de la función objetivo\" depende si se tiene un problema de maximización o minimización.\n```\nDe las soluciones factibles se busca aquella solución óptima (puede haber más de una) que nos dé el valor \"más favorable\" (valor óptimo) de la función objetivo.\nEjemplo: más de una solución óptima\nEs posible tener más de una solución óptima, por ejemplo si la función objetivo hubiera sido $f_o(x) = 
3x_1 + 2x_2$ entonces:", "plt.figure(figsize=(6,6))\npoint4 = (2, 6)\npoint5 = (4, 3)\n\npoint4_point5 = np.row_stack((point4, point5))\n\nplt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1],\n point3_point4_x_1[:,0], point3_point4_x_1[:,1],\n point1_point2_x_2[:,0], point1_point2_x_2[:,1],\n point3_point4_x_2[:,0], point3_point4_x_2[:,1],\n x_1, x_2)\n\nplt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\")\nplt.fill_between(x_1_region_2, 0, 6, color=\"plum\")\n\nplt.plot(point4_point5[:,0], point4_point5[:,1], \n linewidth=2, color = \"darkred\", linestyle='dashed')\n\nplt.legend([\"$x_1 = 4$\", \"$x_1 = 0$\", \n \"$2x_2 = 12$\", \"$x_2 = 0$\",\n \"$3x_1+2x_2 = 18$\",\n \"$18 = f_o(x) = 3x_1 + 2x_2$\"], bbox_to_anchor=(1, 1))\nplt.title(\"Región factible del PL\")\nplt.show()", "El segmento de recta que va de $(2,6)$ a $(4,3)$ (en línea punteada) son soluciones óptimas. Tal segmento es la curva de nivel de $f_o(x)$ con el valor $18$. Cualquier PL que tenga soluciones óptimas múltiples tendrá un número infinito de ellas, todas con el mismo valor óptimo.\n```{admonition} Comentario\nSi un PL tiene exactamente una solución óptima, ésta debe ser una solución FEV. Si tiene múltiples soluciones óptimas, al menos dos deben ser soluciones FEV. Por esto para resolver problemas de PL sólo tenemos que considerar un número finito de soluciones.\n```\nEjemplo: PL's sin solución\nEs posible que el PL no tenga soluciones óptimas lo cual ocurre sólo si:\n\n\nNo tiene soluciones factibles y se le nombra PL no factible.\n\n\nLas restricciones no impiden que el valor de la función objetivo mejore indefinidamente en la dirección favorable. En este caso se tiene un PL con función objetivo no acotada y se le nombra PL no acotado.\n\n\nUn ejemplo de un PL no factible pues su región factible es vacía se obtiene al añadir la restricción $3x_1+ 5x_2 \\geq 50$ a las restricciones anteriores:", "#3x_1 + 5x_2 ≥ 50\n\nx_1_b = np.linspace(0,8, 100)\n\nx_2_b = 1/5*(50 - 3*x_1_b)\n\nplt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1],\n point3_point4_x_1[:,0], point3_point4_x_1[:,1],\n point1_point2_x_2[:,0], point1_point2_x_2[:,1],\n point3_point4_x_2[:,0], point3_point4_x_2[:,1],\n x_1, x_2,\n x_1_b, x_2_b)\n\nplt.legend([\"$x_1 = 4$\", \"$x_1 = 0$\", \n \"$2x_2 = 12$\", \"$x_2 = 0$\",\n \"$3x_1+2x_2 = 18$\",\n \"$3x_1 + 5x_2 = 50$\"], bbox_to_anchor=(1, 1))\n\nplt.fill_between(x_1_b, x_2_b, 10, color=\"plum\")\nplt.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\")\nplt.fill_between(x_1_region_2, 0, 6, color=\"plum\")\nplt.title(\"No existe solución factible\")\nplt.show()", "La intersección entre las dos regiones sombreadas es vacía.\nUn ejemplo de un PL no acotado resulta de sólo considerar las restricciones $x_1 \\leq 4, x_1 \\geq 0, x_2 \\geq 0$:", "points = np.column_stack((4*np.ones(11), np.arange(11)))\nplt.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1],\n point3_point4_x_1[:,0], point3_point4_x_1[:,1],\n point3_point4_x_2[:,0], point3_point4_x_2[:,1])\n\nplt.plot(points[:,0], points[:,1], 'o', markersize=5)\n\nplt.legend([\"$x_1 = 4$\", \"$x_1 = 0$\", \n \"$x_2 = 0$\"], bbox_to_anchor=(1, 1))\n\n\nx_1_region = np.linspace(0,4, 100)\nplt.fill_between(x_1_region, 0, 10, color=\"plum\")\nplt.title(\"Región factible no acotada\")\nplt.show()\n", "Se observa en la gráfica anterior que se tiene una región factible no acotada y como el objetivo es maximizar podemos elegir el valor $x_1 = 4$ y arbitrariamente un valor cada vez más grande de 
$x_2$ y obtendremos una mejor solución dentro de la región factible.\n```{sidebar} Un poco de historia ...\nEl método símplex pertenece a una clase general de algoritmos de optimización con restricciones conocida como métodos de conjuntos activos en los que la tarea fundamental es determinar cuáles restricciones son activas y cuáles inactivas en la solución. Mantiene estimaciones de conjuntos de índices de restricciones activas e inactivas que son actualizadas y realiza cambios modestos a tales conjuntos en cada paso del algoritmo.\n```\n(METODOSIMPLEX)=\nMétodo símplex\nPara comprender sus conceptos fundamentales se considera un PL en una forma no estándar y se utiliza el mismo PL del ejemplo prototipo:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^2} 3x_1 + 5x_2$$\n$$\\text{sujeto a: }$$\n$$x_1 \\leq 4$$\n$$2x_2 \\leq 12$$\n$$3x_1 + 2x_2 \\leq 18$$\n$$x_1 \\geq 0, x_2 \\geq 0$$\n(SOLFEVNFEV)=\nSoluciones FEV y NFEV", "fig, ax = plt.subplots()\n\nax.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], label = \"_nolegend_\")\nax.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], label = \"_nolegend_\")\nax.plot(x_1, x_2, label = \"_nolegend_\")\n\n\nax.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nx_1_region_2 = np.linspace(0,2, 100)\nax.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\n\npoint_FEV_1 = (0,0)\npoint_FEV_2 = (0,6) \npoint_FEV_3 = (2,6) \npoint_FEV_4 = (4,3) \npoint_FEV_5 = (4,0)\n\n\narray_FEV = np.row_stack((point_FEV_1,\n point_FEV_2,\n point_FEV_3,\n point_FEV_4,\n point_FEV_5))\n\npoint_NFEV_1 = (0, 9)\npoint_NFEV_2 = (4, 6)\npoint_NFEV_3 = (6, 0)\n\narray_NFEV = np.row_stack((point_NFEV_1,\n point_NFEV_2,\n point_NFEV_3))\n\n\nax.plot(array_FEV[:,0], array_FEV[:,1], 'o', color=\"orangered\", markersize=10, label=\"FEV\")\n\nax.plot(array_NFEV[:,0], array_NFEV[:,1], '*', color=\"darkmagenta\", markersize=10, label=\"NFEV\")\n\nax.legend()\n\nplt.show()", "Los puntos en la gráfica con etiqueta \"FEV\" son soluciones factibles en un vértice:\n\n$(0, 0), (0, 6), (2, 6), (4, 3), (4, 0)$\n\ny están definidos por las restricciones de desigualdad tomando sólo la igualdad, esto es, por las rectas: $x_1 = 4, 2x_2 = 12, 3x_1 + 2 x_2 = 18, x_1 = 0, x_2 = 0$. \n```{admonition} Definiciones\n\n\nA las rectas que se forman a partir de una desigualdad tomando únicamente la igualdad se les nombra ecuaciones de frontera de restricción o sólo ecuaciones de frontera.\n\n\nLas ecuaciones de frontera que definen a las FEV se les nombra ecuaciones de definición.\n\n\n```\nAnálogamente los puntos con etiqueta \"NFEV\" son soluciones no factibles en un vértice:\n\n$(0, 9), (4, 6), (6,0)$\n\ny también están definidos por las ecuaciones de frontera.\n```{admonition} Observación\n:class: tip\nAunque las soluciones en un vértice también pueden ser no factibles (NFEV) el método símplex no las revisa.\n```\n```{margin}\nEn más de dos dimensiones cada ecuación de definición genera un hiperplano en un espacio $n$ dimensional. Y la intersección de las $n$ ecuaciones de frontera es una solución simultánea de un sistema de $n$ ecuaciones lineales de definición.\n```\n```{admonition} Comentarios\n\n\nEn general para un PL con $n$ variables de decisión se cumple que cada solución FEV se define por la intersección de $n$ ecuaciones de frontera. 
Podría ser que se tengan más de $n$ fronteras de restricción que pasen por el vértice pero $n$ de ellas definen a la solución FEV y éstas son las ecuaciones de definición.\n\n\nCada solución FEV es la solución simultánea de $n$ ecuaciones elegidas entre $m + n$ restricciones. El número de combinaciones de las $m + n$ ecuaciones tomadas $n$ a la vez es la cota superior del número de soluciones FEV. Para el ejemplo prototipo $m = 3, n=2$ por lo que $C^{m+n}_n = C^5_2 = 10$ y sólo $5$ conducen a soluciones FEV.\n\n\n```", "fig, ax = plt.subplots()\n\nax.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], label = \"_nolegend_\")\nax.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], label = \"_nolegend_\")\nax.plot(x_1, x_2, label = \"_nolegend_\")\n\nax.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nx_1_region_2 = np.linspace(0,2, 100)\nax.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\nax.plot(array_FEV[:,0], array_FEV[:,1], 'o', color=\"orangered\", markersize=10, label=\"FEV\")\n\nax.plot(array_NFEV[:,0], array_NFEV[:,1], '*', color=\"darkmagenta\", markersize=10, label=\"NFEV\")\n\nax.legend()\n\nplt.show()", "|Solución FEV| Ecuaciones de definición|\n|:---:|:---:|\n|(0,0)| $x_1 = 0, x_2 = 0$|\n|(0,6)| $x_1 = 0, 2x_2 = 12$|\n|(2,6)| $2x_2 = 12, 3x_1 + 2x_2 = 18$|\n|(4,3)| $3x_1 + 2x_2 = 18, x_1 = 4$|\n|(4,0)| $x_1 = 4, x_2 = 0$|\nFEV adyacentes\n```{admonition} Definición\nEn un PL con $n$ variables de decisión nombramos soluciones FEV adyacentes a dos soluciones FEV que comparten $n-1$ fronteras de restricción. 
Las soluciones FEV adyacentes están conectadas por una arista (segmento de recta)\n```", "fig, ax = plt.subplots()\n\nax.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], label = \"_nolegend_\")\nax.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], label = \"_nolegend_\")\nax.plot(x_1, x_2, label = \"_nolegend_\")\n\nax.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nx_1_region_2 = np.linspace(0,2, 100)\nax.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\nax.plot(array_FEV[:,0], array_FEV[:,1], 'o', color=\"orangered\", markersize=10, label=\"FEV\")\n\nax.plot(array_NFEV[:,0], array_NFEV[:,1], '*', color=\"darkmagenta\", markersize=10, label=\"NFEV\")\n\nax.legend()\n\nplt.show()", "En el ejemplo prototipo $(0,0)$ y $(0,6)$ son adyacentes pues comparten una arista formada por la ecuación de frontera $x_1=0$ y de cada solución FEV salen dos aristas, esto es tienen dos soluciones FEV adyacentes.\n```{admonition} Comentario\nUna razón para analizar las soluciones FEV adyacentes es la siguiente propiedad: \nsi un PL tiene al menos una solución óptima y una solución FEV no tiene soluciones FEV adyacentes que sean mejores entonces ésa debe ser una solución óptima.\n```\n```{admonition} Observación\n:class: tip\nEn el ejemplo prototipo $(2, 6)$ es un punto óptimo pues sus soluciones FEV adyacentes, $(0, 6)$, $(4,3)$ tienen un valor de la función objetivo menor (recuérdese es un problema de maximización).\n```\nPasos que sigue el método símplex\nPara el ejemplo prototipo el método símplex a grandes rasgos realiza lo siguiente:\nPaso inicial: se elige $(0,0)$ como la solución FEV inicial para examinarla (esto siempre se puede hacer para problemas con restricciones de no negatividad).\n```{margin}\nEn el ejemplo numérico se entenderá la frase \"solución FEV adyacente que es mejor\"\n```\nPrueba de optimalidad: revisar condición de optimalidad para $(0,0)$. Concluir que $(0,0)$ no es una solución óptima (existe una solución FEV adyacente que es mejor).\nIteración 1: moverse a una solución FEV adyacente mejor, para esto se realizan los pasos:\n1.Entre las dos aristas de la región factible que salen de $(0,0)$ se elige desplazarse a lo largo de la arista que aumenta el valor de $x_2$ (con una función objetivo $f_o(x) = 3x_1 + 5x_2$ si $x_2$ aumenta entonces el valor de $f_o$ crece más que con $x_1$).\n2.Detenerse al llegar a la primera ecuación de frontera en esa dirección: $2x_2 = 12$ para mantener factibilidad.\n3.Obtener la intersección del nuevo conjunto de ecuaciones de frontera: $(0,6)$.\n```{margin}\nEn el ejemplo numérico se entenderá la frase \"solución FEV adyacente que es mejor\"\n```\nPrueba de optimalidad: revisar condición de optimalidad para $(0,6)$. 
Concluir que $(0,6)$ no es una solución óptima (existe una solución FEV adyacente que es mejor).\nIteración 2: moverse a una solución FEV adyacente mejor:\n1.De las dos aristas que salen de $(0,6)$ moverse a lo largo de la arista que aumenta el valor de $x_1$ (para que la $f_o$ continue mejorando no podemos ir hacia abajo pues esto implicaría disminuir el valor de $x_2$ y por tanto $f_o$).\n2.Detenerse al llegar a la primera ecuación de frontera en esa dirección: $3x_1+2x_2 = 12$ para manterner factibilidad.\n3.Obtener la intersección del nuevo conjunto de ecuaciones de frontera: $(2,6)$.\n```{margin}\nEn el ejemplo numérico se entenderá la frase \"ninguna solución FEV adyacente es mejor\"\n```\nPrueba de optimalidad: revisar condición de optimalidad para $(2,6)$. Concluir que $(2,6)$ es una solución óptima (ninguna solución FEV adyacente es mejor).\n(FORMAAUMENTADAPL)=\nForma aumentada de un PL\nEl método símplex inicia con un sistema de ecuaciones lineales con lado derecho igual a $b$ (que es el lado derecho de las restricciones funcionales) y una matriz del sistema con menos renglones que columnas. Asume que las entradas de $b$ son no negativas y que el rank de $A$ es completo.\nPara revisar los pasos del método símplex descritos anteriormente en esta sección continuaremos con el ejemplo prototipo de PL:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^2} 3x_1 + 5x_2$$\n$$\\text{sujeto a: }$$\n$$x_1 \\leq 4$$\n$$2x_2 \\leq 12$$\n$$3x_1 + 2x_2 \\leq 18$$\n$$x_1 \\geq 0, x_2 \\geq 0$$\nY vamos a nombrar a las desigualdades $x_1 \\leq4, 2x_2 \\leq 12, 3x_1 + 2x_2 \\leq 18$ restricciones funcionales y a las desigualdades $x_1 \\geq 0, x_2 \\geq 0$ restricciones de no negatividad.\n```{admonition} Observación\n:class: tip\nAunque hay diversas formas de PL en las que podríamos tener lados derechos negativos o desigualdades del tipo $\\geq$, es sencillo transformar de forma algebraica tales PL a una forma similar descrita en esta sección. \n```\nEn el ejemplo prototipo tenemos desigualdades por lo que se introducen variables de holgura, slack variables, no negativas para obtener la forma aumentada:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2$$\n$$\\text{sujeto a: }$$\n$$x_1 + x_3 = 4$$\n$$2x_2 + x_4 = 12$$\n$$3x_1 + 2x_2 + x_5 = 18$$\n$$x_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0$$\n```{margin}\nForma estándar de un PL:\n$$\n\\displaystyle \\min_{x \\in \\mathbb{R}^n} c^Tx\\\n\\text{sujeto a:} \\\nAx=b\\\nx \\geq 0\n$$\n```\n```{admonition} Comentarios\n\n\nLa forma aumentada que se obtuvo para el ejemplo prototipo no es la forma estándar de un PL salvo porque en la estándar se usa una minimización, ver {ref}forma estándar de un PL &lt;FORMAESTPL&gt;. 
Sin embargo, la forma estándar del PL se puede obtener considerando que maximizar $3x_1 + 5x_2$ sujeto a las restricciones dadas tiene mismo conjunto óptimo al problema minimizar $-3x_1 - 5 x_2$ sujeto a las mismas restricciones (los valores óptimos entre el problema de maximización y minimización son iguales salvo una multiplicación por $-1$).\n\n\nLas variables de holgura al iniciar el método tienen un coeficiente de $0$ en la función objetivo $f_o(x) = 3x_1 + 5x_2 = 3x_1 + 5x_2 + 0x_3 + 0x_4 + 0x_5$\n\n\n```\nY en notación matricial el sistema de ecuaciones lineales es:\n$$\nAx = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n\\left [\n\\begin{array}{c}\nx_1 \\\nx_2 \\\nx_3 \\\nx_4 \\\nx_5\n\\end{array}\n\\right ]\n=\n\\left[\n\\begin{array}{c}\n4 \\\n12 \\\n18\n\\end{array}\n\\right ]\n=\nb\n$$\n```{admonition} Observación\n:class: tip\nObsérvese que en la matriz de la forma aumentada se tiene una matriz identidad.\n```\n```{admonition} Comentario\nInterpretación de algunos valores de las variables en la forma aumentada:\nSi una variable de holgura es igual a $0$ en la solución actual, entonces esta solución se encuentra sobre la ecuación de frontera de la restricción funcional correspondiente. Un valor mayor que $0$ significa que la solución está en el lado factible de la ecuación de frontera, mientras que un valor menor que $0$ señala que está en el lado no factible de esta ecuación de frontera.\n```", "fig, ax = plt.subplots()\n\nax.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], label = \"_nolegend_\")\nax.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], label = \"_nolegend_\")\nax.plot(x_1, x_2, label = \"_nolegend_\")\n\nax.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nx_1_region_2 = np.linspace(0,2, 100)\nax.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\nax.plot(array_FEV[:,0], array_FEV[:,1], 'o', color=\"orangered\", markersize=10, label=\"FEV\")\n\nax.plot(array_NFEV[:,0], array_NFEV[:,1], '*', color=\"darkmagenta\", markersize=10, label=\"NFEV\")\n\nax.legend()\n\nplt.show()", "```{admonition} Definiciones\nUna solución aumentada es una solución de las variables originales que se aumentó con los valores correspondientes de las variables de holgura.\nUna solución básica es una solución FEV o NFEV aumentada.\nUna solución básica factible (BF) es una solución FEV aumentada.\n```\n```{margin}\n$$\n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n\\left [\n\\begin{array}{c}\nx_1 \\\nx_2 \\\nx_3 \\\nx_4 \\\nx_5\n\\end{array}\n\\right ]\n=\n\\left[\n\\begin{array}{c}\n4 \\\n12 \\\n18\n\\end{array}\n\\right ]\n$$\n```\nEn el ejemplo prototipo:\n\n\n$\\left [ \\begin{array}{c} x_1 \\ x_2 \\end{array} \\right ] = \\left [ \\begin{array}{c} 3 \\ 2 \\end{array} \\right ]$ es solución (de hecho factible) y $\\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 3 \\ 2 \\ 1 \\ 8 \\ 5 \\end{array} \\right ]$ es solución aumentada (factible).\n\n\n$\\left [ \\begin{array}{c} x_1 \\ x_2 \\end{array} \\right ] = \\left [ \\begin{array}{c} 4 \\ 6 \\end{array} \\right ]$ es solución NFEV y $\\left [ \\begin{array}{c} x_1 \\ x_2 \\ 
x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 4 \\ 6 \\ 0 \\ 0 \\ -6 \\end{array} \\right ]$ es solución básica.\n\n\n$\\left [ \\begin{array}{c} x_1 \\ x_2 \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 6 \\end{array} \\right ]$ es solución FEV y $\\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 6 \\ 4 \\ 0 \\ 6 \\end{array} \\right ]$ es solución BF.\n\n\nSoluciones BF adyacentes\n```{admonition} Definición\nDos soluciones BF son adyacentes si sus correspondientes soluciones FEV lo son. \n```", "fig, ax = plt.subplots()\n\nax.plot(point1_point2_x_1[:,0], point1_point2_x_1[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_1[:,0], point3_point4_x_1[:,1], label = \"_nolegend_\")\nax.plot(point1_point2_x_2[:,0], point1_point2_x_2[:,1], label = \"_nolegend_\")\nax.plot(point3_point4_x_2[:,0], point3_point4_x_2[:,1], label = \"_nolegend_\")\nax.plot(x_1, x_2, label = \"_nolegend_\")\n\nax.fill_between(x_1_region_1, 0, x_2_region_1, where=x_2_region_1<=6, color=\"plum\", label = \"_nolegend_\")\nx_1_region_2 = np.linspace(0,2, 100)\nax.fill_between(x_1_region_2, 0, 6, color=\"plum\", label = \"_nolegend_\")\n\nax.plot(array_FEV[:,0], array_FEV[:,1], 'o', color=\"orangered\", markersize=10, label=\"FEV\")\n\nax.plot(array_NFEV[:,0], array_NFEV[:,1], '*', color=\"darkmagenta\", markersize=10, label=\"NFEV\")\n\nax.legend()\n\nplt.show()", "```{margin}\n$$\n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n\\left [\n\\begin{array}{c}\nx_1 \\\nx_2 \\\nx_3 \\\nx_4 \\\nx_5\n\\end{array}\n\\right ]\n=\n\\left[\n\\begin{array}{c}\n4 \\\n12 \\\n18\n\\end{array}\n\\right ]\n$$\n```\nEn el ejemplo prototipo $\\left [ \\begin{array}{c} 0 \\ 0 \\ 4 \\ 12 \\ 18 \\end{array} \\right ]$ y $\\left [ \\begin{array}{c} 0 \\ 6 \\ 4 \\ 0 \\ 6 \\end{array} \\right ]$ son soluciones BF adyacentes. \n(VARBASICASNOBASICAS)=\nVariables básicas y no básicas\n```{admonition} Definición\nDada la matriz $A \\in \\mathbb{R}^{m \\times n}$ de la forma aumentada aquellas variables de decisión que corresponden a columnas linealmente independientes se les nombra variables básicas. Las restantes son variables no básicas.\n```\nAl inicio del método símplex la matriz de la forma aumentada es:\n$$\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$$\nPor lo que las variables básicas son $x_3, x_4, x_5$ y las no básicas son $x_1, x_2$.\n```{admonition} Definición\nLa matriz que se forma a partir de las columnas de $A$ que corresponden a las variables básicas se denota como $B \\in \\mathbb{R}^{m \\times m}$ es no singular y se nombra basis matrix. 
La matriz que se forma con las columnas de las variables no básicas se denota con $N$ y su nombre es nonbasis matrix.\n```\n```{margin}\n$$\n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n\\left [\n\\begin{array}{c}\nx_1 \\\nx_2 \\\nx_3 \\\nx_4 \\\nx_5\n\\end{array}\n\\right ]\n=\n\\left[\n\\begin{array}{c}\n4 \\\n12 \\\n18\n\\end{array}\n\\right ]\n$$\n```\nEn el ejemplo prototipo la basis matrix y la nonbasis matrix al inicio del método son:\n$$B\n=\\left [\n\\begin{array}{ccc}\n1 & 0 & 0 \\\n0 & 1 & 0 \\\n0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$$\n$$N\n\\left [\n\\begin{array}{ccccc}\n1 & 0 \\\n0 & 2 \\\n3 & 2 \\\n\\end{array}\n\\right ]\n$$\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\n```{admonition} Comentarios\n\n\nObsérvese en el ejemplo prototipo que al tener 5 variables y tres ecuaciones si se le asigna un valor arbitrario a $x_1, x_2$ entonces quedan determinadas las variables $x_3, x_4, x_5$. En el método símplex las variables no básicas se igualan a $0$ por lo que se tiene: $\\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5\\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\ 4 \\ 12 \\ 18 \\end{array} \\right ]$\n\n\nUna forma de distinguir si dos soluciones BF son adyacentes es comparar qué variables no básicas (análogamente sus básicas) tienen. Si difieren en sólo una entonces son soluciones BF adyacentes. Por ejemplo: $\\left [ \\begin{array}{c} 0 \\ 0 \\ 4 \\ 12 \\ 18 \\end{array} \\right]$ y $\\left [ \\begin{array}{c} 0 \\ 6 \\ 4 \\ 0 \\ 6 \\end{array} \\right ]$ son BF adyacentes pues tienen como variables no básicas $x_1, x_2$ y $x_1, x_4$ respectivamente. Esto también se puede describir como: $x_2$ \"pasa de ser no básica a básica\" (análogamente $x_4$ pasa de básica a no básica). Lo anterior ayuda a identificar soluciones BF adyacentes en PL's con más de dos variables en los que resulta más complicado graficar.\n\n\nEl método símplex al considerar variables no básicas con valor de $0$ indica que la restricción no negativa $x_j \\geq 0$ es activa para $j$ en los índices de las variables no básicas.\n\n\nEn el método símplex se puede verificar que una solución es BF si las variables básicas son no negativas (recuérdese que las no básicas en el método son igualadas a cero).\n\n\n```\nVariables básicas no degeneradas y degeneradas\nConsiderando un problema con $n$ variables al que se le añadieron $m$ variables de holgura denotemos a $\\mathcal{B}$ como el conjunto de índices en el conjunto ${1, 2, \\dots, m+n}$ que representan a las variables básicas y $\\mathcal{N}$ al conjunto de índices de las no básicas. \nEl ejemplo prototipo en su forma aumentada $\\mathcal{B} = {3, 4, 5}$, $\\mathcal{N} = {1, 2}$ con $m=3, n=2, m+n=5$. 
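A modo de bosquejo con NumPy, estas matrices pueden extraerse de $A$ seleccionando columnas; nótese que en Python los índices son base cero, por lo que $\mathcal{B} = {3, 4, 5}$ corresponde a `[2, 3, 4]` y $\mathcal{N} = {1, 2}$ a `[0, 1]` (los nombres `B_idx`, `N_idx` son sólo ilustrativos):

```python
import numpy as np

A = np.array([[1, 0, 1, 0, 0],
              [0, 2, 0, 1, 0],
              [3, 2, 0, 0, 1]])
B_idx = [2, 3, 4]   # variables básicas x3, x4, x5 (índices base cero)
N_idx = [0, 1]      # variables no básicas x1, x2

B = A[:, B_idx]     # basis matrix: la identidad de 3x3 en el paso inicial
N = A[:, N_idx]     # nonbasis matrix
print(B)
print(N)
```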
El método símplex en sus iteraciones elige algún índice de $\\mathcal{N}$ y lo sustituye por un índice del conjunto $\\mathcal{B}$.\n```{margin}\nEn el ejemplo numérico se entenderá la frase \"mejoren la función objetivo $f_o$\".\n```\n```{admonition} Comentarios\n\n\nEl quitar y añadir variables a los conjuntos de índices $\\mathcal{N}, \\mathcal{B}$ y realizar los ajustes necesarios (recalcular valores de las variables básicas) en los valores de todas las variables básicas y no básicas se le conoce con el nombre de pivoteo.\n\n\nLa interpretación geométrica de quitar, añadir variables de las matrices $N, B$ y realizar los ajustes necesarios (recalcular valores de las variables básicas) en una solución BF es equivalente en dos dimensiones a moverse por una arista y detenerse hasta encontrar una solución FEV.\n\n\nLa elección de cuál variable no básica sustituir por una variable básica depende de la existencia de soluciones BF que mejoren la función objetivo $f_o$ y para ello se utiliza un criterio de optimalidad.\n\n\n```\nEn el método símplex al recalcular los valores de las variables básicas algunas pueden tener valor igual a cero lo que da lugar a la siguiente definición.\n```{admonition} Definición\nUna solución BF para un PL con restricciones de no negatividad en la que todas sus variables básicas son positivas se nombra no degenerada y degenerada si existe al menos una con valor igual a cero.\n```\n(EJMETSIMPLEXAPLICADOEJPROTOTIPO)=\nEjemplo del método símplex aplicado al ejemplo prototipo\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\nContinuemos con el ejemplo prototipo en su forma aumentada. En notación matricial el sistema de ecuaciones lineales es:\n$$\nAx = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n\\left [\n\\begin{array}{c}\nx_1 \\\nx_2 \\\nx_3 \\\nx_4 \\\nx_5\n\\end{array}\n\\right ]\n=\n\\left[\n\\begin{array}{c}\n4 \\\n12 \\\n18\n\\end{array}\n\\right ]\n=\nb\n$$\nDefínanse al vector $x$ que contiene las variables \"originales\" y a las de holgura: $x = \\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ]$ y $c = \\left [ \\begin{array}{c} -3 \\ -5 \\ 0 \\ 0 \\ 0 \\end{array}\\right]$ al vector de costos unitarios o equivalentemente $-c$ el vector de ganancias unitarias. Así, la función objetivo es: $f_o(x) = (-c)^Tx$ y se busca maximizar la ganancia total. 
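Como comprobación rápida (un bosquejo que usa la solución óptima $(x_1, x_2) = (2, 6)$ obtenida con el método gráfico y los valores de holgura que se deducen de las restricciones de igualdad), puede evaluarse $(-c)^Tx$ en la solución aumentada correspondiente:

```python
import numpy as np

c = np.array([-3, -5, 0, 0, 0])      # vector de costos de la forma aumentada
x_opt = np.array([2, 6, 2, 0, 0])    # (x1, x2, x3, x4, x5) con x3 = 4 - 2, x4 = 12 - 12, x5 = 18 - 18
print((-c) @ x_opt)                  # 36: la ganancia máxima en miles de pesos
```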
También defínanse a los vectores de variables básicas y no básicas como: $x_B = [x_j]{j \\in \\mathcal{B}}$, $x_N = [x_j]{j \\in \\mathcal{N}}$.\n```{margin}\nSiendo rigurosos la forma estándar de un PL es:\n$$\n\\displaystyle \\min_{x \\in \\mathbb{R}^n} c^Tx\\\n\\text{sujeto a:} \\\nAx=b\\\nx \\geq 0\n$$\npor lo que aunque maximizar $(-c)^Tx$ sujeto a las restricciones dadas tiene el mismo conjunto óptimo que el problema de minimizar $c^Tx$ sujeto a las mismas restricciones (los valores óptimos entre el problema de maximización y minimización son iguales salvo una multiplicación por $-1$), el problema debe escribirse explícitamente como minimización para considerarse en forma estándar.\n```\nEntonces el PL con esta notación que se debe resolver es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} (-c)^Tx$$\n$$\\text{sujeto a: }$$\n$$Ax = b$$\n$$x \\geq 0$$\ncon $x=\\left [ \\begin{array}{c} x_B \\ x_N\\end{array} \\right ] \\in \\mathbb{R}^5$, $x_B \\in \\mathbb{R}^{3}, x_N \\in \\mathbb{R}^2$, $A \\in \\mathbb{R}^{3 \\times 5}$.\nPaso inicial del ejemplo prototipo\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\nSe tiene la siguiente situación: \n$$(-c)^Tx= 3x_1 + 5x_2 + 0x_3 + 0x_4 + 0x_5$$\n$$\nA = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$$\nComo $A = [ N \\quad B ]$, $x=\\left [ \\begin{array}{c} x_N \\ x_B\\end{array} \\right ]$ y $Ax = b$ entonces $Ax = B x_B + N x_N = b$. \nSe designa $x_N$ como un vector de ceros:\n$$x_N = \\left [ \\begin{array}{c} x_1 \\ x_2 \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\end{array} \\right ]$$\nPor tanto:\n$$Ax = Bx_B + N x_N = B x_B = b$$\ny se tiene:\n$$x_B = B^{-1}b.$$\nEn este paso inicial para el ejemplo prototipo $x_B = b$ pues $B$ es la identidad:\n$$\\therefore x_B = \\left [ \\begin{array}{c} x_3 \\ x_4 \\ x_5\\end{array}\\right ] = B^{-1}b = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\end{array} \\right ]^{-1} \\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ]=\\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ]$$\nEl vector de costos $c$ lo dividimos en $c = \\left [ \\begin{array}{c} c_N\\ c_B \\end{array} \\right ]$, con $c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_4} \\ c_{B_5} \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\ 0 \\end{array} \\right ]$ contiene los costos de las variables básicas. 
El vector $c_N=\\left [ \\begin{array}{c} c_{N_1} \\ c_{N_2} \\end{array} \\right ]=\\left [ \\begin{array}{c}-3 \\ -5 \\end{array} \\right ]$ contiene los costos de las variables no básicas.\nLas variables básicas son $x_3, x_4, x_5$ y las no básicas son $x_1, x_2$.\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\n```{admonition} Comentario\nLa solución BF en el paso inicial $x = \\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\ 4 \\ 12 \\ 18 \\end{array} \\right ]$ tiene como variables no básicas $x_1, x_2$ e indican que las restricciones $x_1 \\geq 0, x_2 \\geq 0$ son restricciones activas.\n```", "B = np.eye(3)\nb = np.array([4, 12, 18])\nx_B = b\nA = np.array([[1, 0, 1, 0, 0],\n [0, 2, 0, 1, 0],\n [3, 2, 0, 0, 1]])\nc_B = np.array([0,0,0])\nc_N = np.array([-3, -5])\n\n#list of indexes of nonbasic variables correspond to x1, x2\nN_list_idx = [0, 1]\n#list of indexes of basic variables correspond to x3, x4, x5\nB_list_idx = [2, 3, 4] \n", "Prueba de optimalidad\nPara revisar tanto en el paso inicial como en las iteraciones posteriores la condición de optimalidad para encontrar soluciones FEV adyacentes mejores, realicemos algunas reescrituras de la función objetivo.\n1.Obsérvese que la función objetivo se puede escribir como:\n```{margin}\nRecuérdese que la función objetivo es: $(-c)^Tx= 3x_1 + 5x_2 + 0x_3 + 0x_4 + 0x_5$.\n```\n$$f_o(x) = (-c)^Tx = [-c_B \\quad -c_N] ^T \\left [ \\begin{array}{c} x_B \\ x_N\\end{array} \\right ] = -c_B^Tx_B - c_N^T x_N = -c_B^T B^{-1}b = [0 \\quad 0 \\quad 0]^T \\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ]=0$$\n2.Obsérvese que las restricciones en su forma igualadas a cero pueden ser restadas de la función objetivo sin modificar su valor. Por ejemplo si tomamos la primer restricción con lado derecho igual a cero: $x_1 + x_3 - 4 = 0$ entonces:\n$$f_o(x) = f_o(x) - 0 = f_o(x) - (x_1 + x_3 - 4) = f_o(x) -x_1 -x_3 + 4$$\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\nY esto podemos hacer para todas las restricciones con lado derecho igual a cero:\n$$\n\\begin{eqnarray}\nf_o(x) &=& f_o(x) - (x_1 + x_3 - 4) - (2x_2 + x_4 - 12) - (3x_1 + 2x_2 + x_5 - 18) \\nonumber \\\n&=& f_o(x) + (-4x_1 - 4x_2) + (-x_3 - x_4 - x_5) + (4 + 12 + 18) \\nonumber \\\n&=& f_o(x) - 4 \\displaystyle \\sum_{j \\in \\mathcal{B}} x_{B_j} - \\sum_{j \\in \\mathcal{N}}x_{N_j} + \\sum_{i = 1}^3 b(i)\n\\end{eqnarray}\n$$\ncon $x_{B_j}$ $j$-ésima componente del vector $x_B$, $x_{N_j}$ $j$-ésima componente del vector $x_N$ y $b(i)$ $i$-ésima componente de $b$.\n```{margin}\nNo es coincidencia que se elijan las cantidades $\\lambda, \\nu$ para representar esta igualdad, ver {ref}la función Lagrangiana &lt;FUNLAGRANGIANA&gt;.\n```\nEn el método símplex no solamente se restan de la $f_o$ las restricciones con lado derecho igual a cero sino se multiplican por una cantidad $\\nu_i$ y se suman a $f_o$. 
También si $\\lambda$ es un vector tal que $\\lambda ^T x = 0$ entonces:\n$$f_o(x) = f_o(x) + \\lambda^Tx + \\sum_{i = 1}^3 \\nu_i h_i(x) = f_o(x) + \\displaystyle \\sum_{j \\in \\mathcal{B}} \\lambda_{B_j} x_{B_j} + \\sum_{j \\in \\mathcal{N}}\\lambda_{N_j}x_{N_j} + \\sum_{i = 1}^3 \\nu_i h_i(x)$$\ncon $\\lambda_{B_j}$, $\\lambda_{N_j}$ coeficientes asociados a $x_{B_j}$ y $x_{N_j}$ respectivamente y $h_i(x)$ $i$-ésima restricción de igualdad con lado derecho igual a cero.\nLos coeficientes $\\lambda_{B_j}, \\lambda_{N_j}$ de la expresión anterior en el método de símplex se escriben como:\n$$\\lambda_{B_j} = -c_{B_j} + \\nu^Ta_j \\quad j \\in \\mathcal{B}$$\n$$\\lambda_{N_j} = -c_{N_j} + \\nu^Ta_j \\quad j \\in \\mathcal{N}$$\ncon $a_j$ $j$-ésima columna de $A \\in \\mathbb{R}^{3 \\times 5}$ y $\\nu \\in \\mathbb{R}^{3}$.\nEn el método símplex se mantiene en cada iteración $\\lambda_{B_j} = 0 \\forall j \\in \\mathcal{B}$ y se busca que $\\lambda_{N_j} \\forall j \\in \\mathcal{N}$ sea no negativo para problemas de minimización o no positivo para problemas de maximización. Si la búsqueda anterior no se logra, se continúa iterando hasta llegar a una solución o mostrar un mensaje si no fue posible encontrar una solución. Por lo anterior el vector $\\nu$ se obtiene resolviendo la ecuación: $\\nu ^T B = c_B^T$ y por tanto $\\nu = B^{-T} c_B $.\n```{admonition} Comentarios\n\n\nLa justificación del por qué $\\lambda = -c + A^T \\nu$ se realizará más adelante, por lo pronto considérese que esto se obtiene de las {ref}condiciones KKT de optimalidad &lt;PRIMERAFORMULACIONCONDKKT&gt;.\n\n\nPor la definición de $\\nu$ se cumple: $f_o(x) = (-c)^Tx = -c_B^Tx_B - c_N^T x_N = -c_B^T B^{-1}b = - \\nu^Tb = b^T(-\\nu)$.\n\n\nNo se recomienda aprenderse fórmulas o expresiones pues este problema se planteó como maximizar $(-c)^Tx$, si se hubiera elegido maximizar $c^Tx$ (sin signo negativo) se modificarían un poco las expresiones anteriores para $\\lambda, \\nu, f_o(x)$.\n\n\n```\nLa prueba de optimalidad consiste en revisar los $\\lambda_{N_j}$, $j \\in \\mathcal{N}$. Se selecciona(n) aquella(s) variable(s) no básica(s) que tenga(n) la tasa más alta de mejoramiento (esto depende si es un problema de maximización o minimización) del valor en la función objetivo.\n```{margin}\n$c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_4} \\ c_{B_5} \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\ 0 \\end{array} \\right ]$\n```\nEl vector $\\nu$ es:\n$$\\nu = B^{-T}c_B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\end{array} \\right ] ^{-T} \\left [ \\begin{array}{c} 0 \\ 0 \\ 0 \\end{array}\\right ] = \\left [ \\begin{array}{c} 0 \\ 0 \\ 0 \\end{array}\\right ]$$\n```{margin}\nResolviendo un sólo sistema de ecuaciones lineales nos ayuda a evitar calcular la inversa de una matriz que implica resolver un sistema de ecuaciones lineales más grande.\n```\nPara el cálculo de $\\nu$ resolvemos el sistema de ecuaciones lineales para el vector de incógnitas $\\nu$: \n$$B^T \\nu = c_B$$\nComo $B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\end{array} \\right ]$ entonces directamente $\\nu = c_B$. 
Por tanto:", "nu = np.array([0, 0, 0])", "Valor de la función objetivo en la solución BF actual: $f_o(x) = (-c)^Tx = b^T(-\\nu) = 0$\n```{margin}\n$\n\\begin{eqnarray}\nf_o(x) &=& (-c)^Tx \\nonumber\\\n&=& -c_B^Tx_B - c_N^T x_N \\nonumber\\\n&=& -c_B^T x_B \\quad \\text{pues } x_N=0\\\n\\end{eqnarray}$ \n```", "print(np.dot(-c_B, x_B))", "```{margin}\n$f_o(x) = b^T(-\\nu)$.\n```", "print(np.dot(b, -nu))", "```{margin}\n$c_N= \\left [ \\begin{array}{c}-3 \\ -5 \\end{array} \\right ]$\n```", "lambda_N_1 = -c_N[0] + np.dot(nu, A[:, N_list_idx[0]])\n\nlambda_N_2 = -c_N[1] + np.dot(nu, A[:, N_list_idx[1]])\n\nprint(lambda_N_1)\n\nprint(lambda_N_2)", "$\\lambda_{N_1} = -c_{N_1} + \\nu^Ta_1 = 3 + [0 \\quad 0 \\quad 0] \\left [ \\begin{array}{c} 1 \\ 0 \\ 3 \\end{array}\\right ] = 3$\n$\\lambda_{N_2} = -c_{N_2} + \\nu^Ta_2 = 5 + [0 \\quad 0 \\quad 0] \\left [ \\begin{array}{c} 0 \\ 2 \\ 2 \\end{array}\\right ] = 5$\n```{margin}\n\"tasa más alta de mejoramiento\" se refiere a mejorar $f_o$ por un incremento de una unidad en la variable $x_2$.\n```\n```{margin}\nIncrementar una variable no básica equivale geométricamente a moverse por una arista desde una solución FEV.\n```\nComo tenemos un problema de maximización la tasa más alta de mejoramiento de $f_o$ la da la variable $x_2$ por lo que es la variable no básica que sustituye a una variable básica. ¿Cuál variable básica se debe elegir?", "#index for nonbasic variables, in this case value 1 correspond to x2\n\nidx_x_N = 1", "Prueba del cociente mínimo\nEl objetivo de esta prueba es determinar qué variable(s) básica(s) llega(n) a cero cuando crece la variable entrante. Tal variable(s) básica(s) en la siguiente iteración será no básica y la que aumenta pasa de ser no básica a básica.\nEn el paso inicial las variables básicas son $x_3$, $x_4$ y $x_5$, por lo que hay que determinar de éstas cuál(es) es(son) la(s) que sale(n) al incrementar la variable no básica $x_2$.\nEsta prueba del cociente mínimo primero se explicará de forma detallada para posteriormente representarla de forma matricial.\nLas ecuaciones de $Ax = b$ son:\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\n$$\\begin{eqnarray}\nx_1 + x_3 &=& 4 \\nonumber \\\n2x_2 + x_4 &=& 12 \\nonumber \\\n3x_1 + 2x_2 + x_5 &=& 18\n\\end{eqnarray}\n$$\nDespejamos cada variable básica de las ecuaciones anteriores:\n$$\\begin{eqnarray}\nx_3 &=& 4 - x_1 \\nonumber \\\nx_4 &=& 12 - 2x_2 \\nonumber \\\nx_5 &=& 18 - 3x_1 - 2x_2\n\\end{eqnarray}\n$$\nY se debe cumplir por las restricciones de no negatividad que al aumentar $x_2$:\n$$\\begin{eqnarray}\nx_3 &=& 4 - x_1 \\geq 0 \\nonumber \\\nx_4 &=& 12 - 2x_2 \\geq 0 \\nonumber \\\nx_5 &=& 18 - 3x_1 - 2x_2 \\geq 0\n\\end{eqnarray}\n$$\nEn la primera ecuación $x_3 = 4 - x_1$ no tenemos contribución alguna de $x_2$ por lo que no la tomamos en cuenta. La segunda y tercer ecuación sí aparece $x_2$ y como $x_1$ es variable no básica con valor $0$ se debe cumplir:\n$$\\begin{eqnarray}\nx_2 \\leq \\frac{12}{2} = 6 \\nonumber \\\nx_2 \\leq \\frac{18}{2} = 9\n\\end{eqnarray}\n$$\nEntonces se toma el mínimo de las cantidades anteriores y como es igual a $6$ y esa desigualdad la obtuvimos de despejar $x_4$ entonces se elige $x_4$ como variable básica que se vuelve no básica. 
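De manera numérica, los dos cocientes anteriores pueden calcularse así (esquema mínimo; $b$ y la segunda columna de $A$ son los del ejemplo prototipo y se asume numpy importado como np):\n```python\nb = np.array([4, 12, 18])\na_2 = np.array([0, 2, 2])   # coeficientes de x_2 en cada restricción de igualdad\nmask = a_2 > 0              # sólo se consideran las entradas estrictamente positivas\nprint(b[mask] / a_2[mask])  # [6. 9.] -> el mínimo es 6 y proviene de la ecuación de x_4\n```\n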
Tomamos el mínimo pues el valor de $6$ es lo máximo que podemos incrementar $x_2$ sin que $x_4$ se haga negativa.\n```{admonition} Comentario\nEs importante en la prueba del cociente mínimo que el lado derecho, el vector $b$, tenga entradas no negativas.\n```\nPrueba del cociente mínimo: forma general y con notación matricial y vectorial\nEl procedimiento de la prueba del cociente mínimo entonces consiste en: \n1.Elegir la columna de $A$ correspondiente a la variable no básica que sustituye a la variable básica, que por lo anterior es $x_2$ y corresponde a la segunda columna de $A$, $a_2$:\n$$\nA = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$$\n2.Hacer la multiplicación $d = B^{-1}a_2$ (ver siguiente paso para el cálculo de $d$). \nEsto se realiza en general; en el paso inicial del ejemplo prototipo $B^{-1}$ es la identidad, por lo que la multiplicación no tiene efecto. \n3.Para las entradas estrictamente positivas de tal multiplicación anterior se divide el lado derecho entre tales entradas y se toma el mínimo. Como el lado derecho en cada iteración es $x_B = B^{-1}b$ entonces se dividen los valores de las variables básicas entre las entradas estrictamente positivas. Esto es, si se denota como $x_2^{+}$ al mínimo:\n$$x_2^{+} = \\min {\\frac{x_{B_i}}{d_i} : d_i > 0, i = 1, 2, \\dots, m }$$\ncon $d_i$ $i$-ésima componente del vector $d$ que es solución del sistema de ecuaciones: $Bd = a_2$ y $x_{B_i}$ $i$-ésima entrada del vector $x_B$ de la iteración actual. \n4.El índice donde ocurre el mínimo es el de la variable básica que será sustituida. \nEn el ejemplo prototipo se tienen las siguientes asignaciones en el paso inicial:\n```{margin}\n$\nB = \n\\left [\n\\begin{array}{ccc}\n1 & 0 & 0 \\\n0 & 1 & 0 \\\n0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$\n```\n```{margin}\nLa segunda columna de $A$ se elige pues $x_2$ es la variable no básica a la que se le aumentará su valor y sustituirá a una variable básica.\n```\nSe resuelve la ecuación: $Bd = a_2$ para $d$ vector de incógnitas y $a_2$ segunda columna de $A$.", "d = np.linalg.solve(B, A[:,idx_x_N])\n\nprint(d)", "En esta iteración: $x_B = \\left [ \\begin{array}{c} x_3 \\ x_4 \\ x_5\\end{array}\\right ] = \\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ]$ pues $B$ es la matriz identidad por lo que $x_B = b$.", "print(x_B)", "```{margin}\nSe hace la división únicamente entre las entradas estrictamente positivas\n```", "idx_positive = d >0\n\nprint(x_B[idx_positive]/d[idx_positive])", "```{margin}\nHacer cero una variable básica y convertirla en no básica equivale geométricamente a detener el movimiento por una arista hasta encontrar una solución FEV.\n```\nEntonces el mínimo ocurre en la segunda posición de $x_B$ que corresponde a la variable básica $x_4$. Se elige $x_4$ como variable básica que se vuelve no básica. $x_4$ será sustituida por $x_2$ en la próxima iteración.", "#index for basic variables, in this case value 1 corresponds to x4\n\nidx_x_B = 1", "Actualización del vector $x_B$\nLa actualización de las variables básicas después del paso inicial se realiza con la expresión computacional:\n$$x_B = x_B - dx_{nb}^{+}$$\ndonde: $nb$ es el índice de la variable no básica que se volverá básica en la iteración actual. El superíndice $+$ se utiliza para señalar que se actualizará tal variable no básica (variable que entra). 
Después de incrementarla, la variable básica con índice $ba$ pasa a estar en $x_N$ con valor de $0$ (variable que sale).\nPosterior a la actualización de $x_B$ se intercambian las columnas de $B$ correspondientes a las variables $x_{nb}$ y $x_{ba}$.\nLa justificación de la expresión anterior es la siguiente:\nComo $Ax = b$ y $Bx_B + Nx_N = b$ pero será incrementada la variable $x_{nb}$ y disminuida $x_{ba}$ a cero entonces si $x^+$ denota el nuevo valor de $x$ se tiene:\n$$b = Ax ^+ = Bx_B^+ + a_{nb}x_{nb}^+ = B x_B = Ax = b$$\nrecordando que $Nx_N = 0$ pues $x_N=0$ donde: $a_{nb}$ es la ${nb}$-ésima columna de $A$.\nPor tanto:\n$$Bx_B^+ = Bx_B - a_{nb}x_{nb}^+$$\nY premultiplicando por la inversa:\n$$x_B^+ = x_B - B^{-1}a_{nb}x_{nb}^+ = x_B - d x_{nb}^+.$$\nLa interpretación geométrica de esta actualización es: un movimiento a lo largo de una arista del poliedro que mantiene factibilidad y mejora la función objetivo. Nos movemos a lo largo de esta arista hasta encontrar una solución FEV. En esta nueva solución FEV una nueva restricción no negativa se vuelve activa, la que corresponde a la variable $x_{ba}$. \nDespués de la actualización se remueve el índice $ba$ del conjunto $\\mathcal{B}$ y se sustituye por el índice $nb$.\nIteración 1\nLa matriz $B$ del paso inicial era:\n$$B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\end{array} \\right ]$$\ny correspondía cada columna a las variables $x_3, x_4, x_5$ en ese orden.\nSe realiza la actualización descrita para $x_B$:\n$$x_B = x_B - dx_2^{+}$$\ncon $x_2$ es la variable no básica que se volverá básica en la iteración actual.", "x_2_plus = np.min(x_B[idx_positive]/d[idx_positive])\n\nprint(x_2_plus)\n\nx_B = x_B - d*x_2_plus\n\nprint(x_B)", "Aquí el valor de la variable $x_4$ se hace cero y tenemos que intercambiar tal entrada con la de $x_2^+$ para el vector $x_B$:", "x_B[idx_x_B] = x_2_plus\nprint(x_B)", "```{margin}\nAntes de hacer el intercambio de columnas: $B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\end{array} \\right ]$ y la matriz original $A = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 2 & 0 & 1 & 0 \\\n3 & 2 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$\n```\nComo $x_4$ se intercambia por $x_2$ entonces se intercambia la columna $2$ de $A$, $a_2$, por la $2$ de $B$, $b_2$ por lo que al final de la iteración 1:\n$$B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]$$", "B[:,idx_x_B] = A[:,idx_x_N]", "$x_B = \\left [ \\begin{array}{c} x_3 \\ x_2 \\ x_5 \\end{array}\\right ] = \\left [ \\begin{array}{c} 4 \\ 6 \\ 6 \\end{array}\\right ]$, $x_N = \\left [ \\begin{array}{c} x_1 \\ x_4\\end{array}\\right ] = \\left [ \\begin{array}{c} 0 \\ 0\\end{array}\\right ]$.", "aux = B_list_idx[idx_x_B]\nB_list_idx[idx_x_B] = N_list_idx[idx_x_N]\nN_list_idx[idx_x_N] = aux", "```{admonition} Observación\n:class: tip\nLa actualización de $x_B$ anterior se puede verificar que es equivalente a:\n$$x_B = \\left [ \\begin{array}{c} x_3 \\ x_2 \\ x_5\\end{array}\\right ] = B^{-1}b = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]^{-1} \\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ] = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \\frac{1}{2} & 0 \\ 0 & -1 & 1 \\end{array} \\right ]\\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ] = \\left [ \\begin{array}{c} 4 \\ 6 \\ 6 \\end{array}\\right ] $$\n```\n```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in 
\\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\n```{admonition} Comentario\nEn este punto, la solución BF obtenida en esta iteración $x = \\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ 6 \\ 4 \\ 0 \\ 6 \\end{array} \\right ]$ tiene como variables no básicas $x_1, x_4$ e indican que las restricciones $x_1 \\geq 0, x_4 \\geq 0$ son restricciones activas. Además como $x_4$ es variable de holgura la restricción funcional asociada $2x_2 + x_4 \\leq 12$ indica que la solución BF se encuentra sobre la ecuación de frontera.\n```\nPrueba de optimalidad\nSe recalcula el vector $\\nu$ considerando que $c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_2} \\ c_{B_5} \\end{array}\\right ]$ tomando ahora $x_3, x_2, x_5$ como variables básicas.\n```{margin}\n$B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]$\n```\n```{margin}\n$c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_2} \\ c_{B_5} \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ -5 \\ 0 \\end{array} \\right ]$\n```\nEl vector $\\nu$ es:\n$$\\nu = B^{-T}c_B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & \\frac{1}{2} & 0 \\ 0 & -1 & 1 \\end{array} \\right ] ^T \\left [ \\begin{array}{c} 0 \\ -5 \\ 0 \\end{array}\\right ] = \\left [ \\begin{array}{c} 0 \\ -\\frac{5}{2} \\ 0 \\end{array}\\right ]$$\n```{margin}\nResolviendo un sólo sistema de ecuaciones lineales nos ayuda a evitar calcular la inversa de una matriz que implica resolver un sistema de ecuaciones lineales más grande.\n```\nPara el cálculo de $\\nu$ resolvemos el sistema de ecuaciones lineales para el vector de incógnitas $\\nu$: \n$$B^T \\nu = c_B$$", "aux = c_B[idx_x_B]\nc_B[idx_x_B] = c_N[idx_x_N]\nc_N[idx_x_N] = aux\n\nnu = np.linalg.solve(B.T, c_B)\n\nprint(nu)", "Por tanto:\n```{margin}\n$c_N= \\left [ \\begin{array}{c}-3 \\ 0 \\end{array} \\right ]$\n```", "lambda_N_1 = -c_N[0] + np.dot(nu, A[:,N_list_idx[0]])\n\nlambda_N_4 = -c_N[1] + np.dot(nu, A[:,N_list_idx[1]])\n\nprint(lambda_N_1)\n\nprint(lambda_N_4)", "$\\lambda_{N_1} = -c_{N_1} + \\nu^Ta_1 = 3 + [0 \\quad -\\frac{5}{2} \\quad 0] \\left [ \\begin{array}{c} 1 \\ 0 \\ 3 \\end{array}\\right ] = 3$\n$\\lambda_{N_4} = -c_{N_4} + \\nu^Ta_4 = 0 + [0 \\quad -\\frac{5}{2} \\quad 0]\\left [ \\begin{array}{c} 0 \\ 1 \\ 0 \\end{array}\\right ] = -2.5$\nComo tenemos un problema de maximización la tasa más alta de mejoramiento de $f_o$ la da la variable $x_1$ por lo que es la variable no básica que sustituye a una variable básica.", "#index for nonbasic variables, in this case value 0 correspond to x1\n\nidx_x_N = 0", "Valor de la función objetivo en la solución BF actual: $f_o(x) = (-c)^Tx = b^T(-\\nu) = 30$\n```{margin}\n$\n\\begin{eqnarray}\nf_o(x) &=& (-c)^Tx \\nonumber\\\n&=& -c_B^Tx_B - c_N^T x_N \\nonumber\\\n&=& -c_B^T x_B \\quad \\text{pues } x_N=0\\\n\\end{eqnarray}$ \n```", "print(np.dot(-c_B, x_B ))", "```{margin}\n$f_o(x) = b^T(-\\nu)$.\n```", "print(np.dot(b, -nu))", "Prueba del cociente mínimo\n```{margin}\n$B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]$\n```\n```{margin}\nLa primer columna de $A$ se elige pues $x_1$ es la variable no básica a la que se le aumentará su valor y sustituirá a una variable básica.\n```\nSe resuelve la ecuación: $Bd = a_1$ para $d$ vector de incógnitas y $a_1$ primera columna de $A$.", "d 
= np.linalg.solve(B, A[:, idx_x_N])\n\nprint(d)", "En esta iteración: $x_B = \\left [ \\begin{array}{c} x_3 \\ x_2 \\ x_5\\end{array}\\right ] = \\left [ \\begin{array}{c} 4 \\ 6 \\ 6 \\end{array}\\right ]$.", "print(x_B)", "```{margin}\nSe hace la división únicamente entre las entradas estrictamente positivas\n```\n$$x_1^{+} = \\min {\\frac{x_{B_i}}{d_i} : d_i > 0, i = 1, 2, \\dots, m }$$", "idx_positive = d >0\n\nprint(x_B[idx_positive]/d[idx_positive])", "Entonces el mínimo ocurre en la tercera posición de $x_B$ que corresponde a la variable básica $x_5$. Se elige $x_5$ como variable básica que se vuelve no básica. $x_5$ será sustituida por $x_1$ en la próxima iteración.", "#index for basic variables, in this case value 2 correspond to x5\n\nidx_x_B = 2", "Iteración 2\nLa matriz $B$ de la iteración anterior era:\n$$B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]$$\ny correspondía cada columna a las variables $x_3, x_2, x_5$ en ese orden.\nSe realiza la actualización descrita para $x_B$:\n$$x_B = x_B - dx_1^{+}$$\ncon $x_1$ es la variable no básica que se volverá básica en la iteración actual.", "x_1_plus = np.min(x_B[idx_positive]/d[idx_positive])\n\nprint(x_1_plus)\n\nx_B = x_B - d*x_1_plus\n\nprint(x_B)", "Aquí el valor de la variable $x_5$ se hace cero y tenemos que intercambiar tal entrada con la de $x_1^+$ para el vector $x_B$:", "x_B[idx_x_B] = x_1_plus\nprint(x_B)", "```{admonition} Observación\n:class: tip\nLa actualización de $x_B$ anterior se puede verificar que es equivalente a:\n$$x_B = \\left [ \\begin{array}{c} x_3 \\ x_2 \\ x_1\\end{array}\\right ] = B^{-1}b = \\left [ \\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 3 \\end{array} \\right ]^{-1} \\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ] = \\left [ \\begin{array}{ccc} 1 & \\frac{1}{3} & -\\frac{1}{3} \\ 0 & \\frac{1}{2} & 0 \\ 0 & -\\frac{1}{3} & \\frac{1}{3} \\end{array} \\right ]\\left [ \\begin{array}{c} 4 \\ 12 \\ 18 \\end{array}\\right ] = \\left [ \\begin{array}{c} 2 \\ 6 \\ 2 \\end{array}\\right ]$$\n```\n```{margin}\nAntes de hacer el intercambio de columnas: $B = \\left [ \\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \\end{array} \\right ]$ y la matriz original $ A = \n\\left [\n\\begin{array}{ccccc}\n1 & 0 & 1 & 0 & 0 \\\n0 & 1 & 0 & 1 & 0 \\\n3 & 0 & 0 & 0 & 1 \\\n\\end{array}\n\\right ]\n$\n```\nComo $x_5$ se intercambia por $x_1$ entonces se intercambia la columna $1$ de $A$, $a_1$, por la $3$ de $B$, $b_3$ por lo que al final de la iteración $2$:\n$$B = \\left [ \\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 3 \\end{array} \\right ]$$", "B[:,idx_x_B] = A[:,idx_x_N]", "$x_B = \\left [ \\begin{array}{c} x_3 \\ x_2 \\ x_1 \\end{array}\\right ] = \\left [ \\begin{array}{c} 2 \\ 6 \\ 2 \\end{array}\\right ]$, $x_N = \\left [ \\begin{array}{c} x_5 \\ x_4\\end{array}\\right ] = \\left [ \\begin{array}{c} 0 \\ 0\\end{array}\\right ]$.", "aux = B_list_idx[idx_x_B]\nB_list_idx[idx_x_B] = N_list_idx[idx_x_N]\nN_list_idx[idx_x_N] = aux", "```{margin}\nLa forma aumentada recuérdese es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^5} 3x_1 + 5x_2 \\\n\\text{sujeto a: }\\\nx_1 + x_3 = 4 \\\n2x_2 + x_4 = 12 \\\n3x_1 + 2x_2 + x_5 = 18 \\\nx_1 \\geq 0, x_2 \\geq 0, x_3 \\geq 0, x_4 \\geq 0, x_5 \\geq 0\n$$\n```\n```{admonition} Comentario\nEn este punto, la solución BF obtenida en esta iteración $x = \\left [ \\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\end{array} \\right ] = \\left [ \\begin{array}{c} 2 \\ 6 \\ 2 \\ 0 \\ 0 
\\end{array} \\right ]$ tiene como variables no básicas $x_4, x_5$ e indican que las restricciones $x_4 \\geq 0, x_5 \\geq 0$ son restricciones activas. Además como $x_4$ y $x_5$ son variables de holgura las restricciones funcionales asociadas $2x_2 + x_4 \\leq 12$, $3x_1 + 2x_2 + x_5 \\leq 18$ indican que la solución BF se encuentra sobre sus ecuaciones de frontera respectivas.\n```\nPrueba de optimalidad\nSe recalcula el vector $\\nu$ considerando que $c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_2} \\ c_{B_1} \\end{array}\\right ]$ tomando ahora $x_1$ como básica.\n```{margin}\n$B = \\left [ \\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 3 \\end{array} \\right ]$\n```\n```{margin}\n$c_B = \\left [ \\begin{array}{c} c_{B_3} \\ c_{B_2} \\ c_{B_1} \\end{array} \\right ] = \\left [ \\begin{array}{c} 0 \\ -5 \\ -3 \\end{array} \\right ]$\n```\nEl vector $\\nu$ es:\n$$\\nu = B^{-T}c_B = \\left [ \\begin{array}{ccc} 1 & \\frac{1}{3} & -\\frac{1}{3} \\ 0 & \\frac{1}{2} & 0 \\ 0 & -\\frac{1}{3}& \\frac{1}{3} \\end{array} \\right ] ^T \\left [ \\begin{array}{c} 0 \\ -5 \\ -3 \\end{array}\\right ] = \\left [ \\begin{array}{c} 0 \\ -\\frac{3}{2} \\ -1 \\end{array}\\right ]$$\n```{margin}\nResolviendo un sólo sistema de ecuaciones lineales nos ayuda a evitar calcular la inversa de una matriz que implica resolver un sistema de ecuaciones lineales más grande.\n```\nPara el cálculo de $\\nu$ resolvemos el sistema de ecuaciones lineales para el vector de incógnitas $\\nu$: \n$$B^T \\nu = c_B$$\n```{margin}\n$B = \\left [ \\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 3 \\end{array} \\right ]$\n```", "aux = c_B[idx_x_B]\nc_B[idx_x_B] = c_N[idx_x_N]\nc_N[idx_x_N] = aux\n\nnu = np.linalg.solve(B.T, c_B)\n\nprint(nu)", "Por tanto:\n```{margin}\n$c_N= \\left [ \\begin{array}{c}0 \\ 0 \\end{array} \\right ]$\n```", "lambda_N_5 = -c_N[0] + np.dot(nu, A[:,N_list_idx[0]])\n\nlambda_N_4 = -c_N[1] + np.dot(nu, A[:,N_list_idx[1]])\n\nprint(lambda_N_5)\n\nprint(lambda_N_4)", "$\\lambda_{N_5} = -c_{N_5} + \\nu^Ta_5 = 0 + [0 \\quad -\\frac{3}{2} \\quad -1]\\left [ \\begin{array}{c} 0 \\ 0 \\ 1 \\end{array}\\right ] = -1$\n$\\lambda_{N_4} = -c_{N_4} + \\nu^Ta_4 = 0 + [0 \\quad -\\frac{3}{2} \\quad -1] \\left [ \\begin{array}{c} 0 \\ 1 \\ 0 \\end{array}\\right ] = -1.5$\nÍndices de las variables básicas:", "print(B_list_idx)", "Índices de las variables no básicas:", "print(N_list_idx)", "Valores de $\\nu$:", "print(nu)", "```{margin}\nRecuérdese que en el método símplex se mantiene en cada iteración $\\lambda_{B_j} = 0 \\forall j \\in \\mathcal{B}$ y se busca que $\\lambda_{N_j} \\forall j \\in \\mathcal{N}$ sea no negativo para problemas de minimización o no positivo para problemas de maximización.\n```\n```{admonition} Comentario\nEntonces al finalizar el método símplex aplicado al ejemplo prototipo:\n\n\n$\\lambda_{B_1} = \\lambda_{B_2} = \\lambda_{B_3} = 0$ para los índices de las variables básicas $x_1, x_2, x_3$, $\\mathcal{B} = {1, 2, 3}$.\n\n\n$\\lambda_{N_4} = -1.5$, $\\lambda_{N_5} = -1$ para los índices de las variables no básicas $x_4, x_5$, $\\mathcal{N} = {4, 5}$.\n\n\n$\\nu_{1} = 0, \\nu_{2} = -1.5, \\nu_{3} = -1$.\n\n\n```\nComo tenemos un problema de maximización la tasa más alta de mejoramiento de $f_o$ no la da ninguna de las variables no básicas por lo que la solución BF actual $x = \\left [ \\begin{array}{c} x_1\\ x_2\\ x_3\\ x_4 \\ x_5 \\end{array} \\right ]= \\left [ \\begin{array}{c} 2\\ 6\\ 2\\ 0 \\ 0 \\end{array}\\right ]$ es la solución óptima.\nValor de la función objetivo 
en la solución BF actual: $f_o(x) = (-c)^Tx = b^T(-\\nu) = 36$\n```{margin}\n$\n\\begin{eqnarray}\nf_o(x) &=& (-c)^Tx \\nonumber\\\n&=& -c_B^Tx_B - c_N^T x_N \\nonumber\\\n&=& -c_B^T x_B \\quad \\text{pues } x_N=0\\\n\\end{eqnarray}$ \n```", "print(np.dot(-c_B, x_B))", "```{margin}\n$f_o(x) = b^T(-\\nu)$.\n```", "print(np.dot(b, -nu))", "Algoritmo para un paso del método símplex\nPara un problema de la forma:\n$$\\displaystyle \\min_{x \\in \\mathbb{R}^n} c^Tx$$\n$$\\text{sujeto a:}$$\n$$Ax=b$$\n$$x \\geq 0$$\n\nDados $\\mathcal{B}, \\mathcal{N}, x_B = B^{-1}b \\geq 0, x_N=0$\n\nResolver $B^T \\nu = c_B$ para $\\nu$\nCalcular $\\lambda_N = c_N - N^T\\nu$\nSi $\\lambda_N \\geq 0$ se encontró un punto óptimo, si no:\n\nSeleccionar $nb \\in \\mathcal{N}$ con $\\lambda_{nb} < 0$ como el índice que entra\nResolver $Bd = A_{nb}$ para $d$.\nSi $d \\leq 0$ detenerse, el problema es no acotado.\nCalcular $x_{nb}^+ = \\min{\\frac{x_{B_i}}{d_i} : d_i >0}$ y sea $ba$ el índice que minimiza.\nActualizar $x_B = x_B - dx_{nb}^+$, $x_N^+ = (0, \\dots, 0, x_{nb}^+, 0, \\dots, 0)^T$\nCambiar $\\mathcal{B}$ al añadir $nb$ y remover la variable básica correspondiente a la columna $ba$ de $B$.\n\n\n\n```{admonition} Ejercicio\n:class: tip\nResuelve con el método símplex y corrobora con algún software tu respuesta:\n$$\\displaystyle \\min_{x \\in \\mathbb{R}^3}x_1 + x_2 - 4x_3$$\n$$\\text{sujeto a:}$$\n$$x_1 + x_2 + 2x_3 \\leq 9$$\n$$x_1 + x_2 - x_3 \\leq 2$$\n$$-x_1 + x_2 + x_3 \\leq 4$$\n$$x_1 \\geq 0, x_2 \\geq 0 , x_3 \\geq 0$$\n```\nConsideraciones sobre el método símplex\n\n\nEl método símplex termina con una solución BF si el PL no tiene variables básicas degeneradas y tiene una región acotada.\n\n\nAl método símplex anteriormente descrito se le tienen que añadir funcionalidades para realizar una implementación que maneje los siguientes puntos:\n\n\n-)Sobre empates en la variable básica, no básica.\n-)Sobre el álgebra matricial numérica.\n-)Sobre variables básicas degeneradas.\n-)Sobre variables de decisión con cotas inferiores distintas de cero, con cotas superiores.\nNo se profundizará sobre los puntos anteriores y se sugiere ir a las referencias de esta nota para su consulta.\nPL's large scale\nLos PL's que modelan aplicaciones reales tienden a encontrarse en la clasificación large scale. Aunque tal término es ambiguo, pues depende de la máquina en la que se realice el cómputo e involucra el número de variables o parámetros y la cantidad de almacenamiento para datos, lo asociamos con problemas de optimización con restricciones que tienen un número de variables y restricciones mayor o igual a $10^5$ (ambas).\nEnunciado de un ejemplo para un problema medium scale\nSupóngase que al igual que en el {ref}ejemplo prototipo &lt;EJPROTOTIPO&gt; una compañía desea resolver un problema de mezcla de productos. Tal compañía tiene $10$ plantas en varias partes del mundo. Cada una elabora los mismos $10$ productos y después los vende en su región. Se conoce la demanda (ventas potenciales) de cada producto en cada planta en cada uno de los próximos $10$ meses. Aunque la cantidad de producto vendido en un mes dado no puede exceder la demanda, la cantidad producida puede ser mayor, y la cantidad en exceso se debería almacenar en inventario (con un costo unitario por mes) para su venta posterior. 
Cada unidad de cada producto ocupa el mismo espacio en almacén y cada planta tiene un límite superior para el número total de unidades que se puede guardar (la capacidad del inventario).\nCada planta realiza los $10$ procesos de producción con máquinas y tales máquinas se pueden usar para producir cualquiera de los $10$ productos. Tanto el costo de producción por unidad como la tasa de producción de un producto (número de unidades producidas por día dedicado a ese producto) dependen de la combinación de plantas y máquinas involucradas y no del mes que se realizará la producción. El número de días hábiles (días de producción disponibles) varía un poco de un mes a otro.\nComo algunas plantas y máquinas pueden producir un producto dado ya sea a menor costo o a una tasa más rápida que otras plantas y máquinas, en ocasiones vale la pena enviar algunas unidades del producto de una planta a otra para que esta última las venda. Existe cierto costo asociado con cada unidad enviada de cualquier producto de cada combinación de una planta que envía (planta origen) y una planta que recibe (planta destino), donde este costo unitario es el mismo para todos los productos.\nLa administración necesita determinar cuántas unidades de cada producto debe producir en cada máquina de cada planta cada mes, al igual que cuántas unidades de cada producto debe vender cada planta cada mes y cuántas unidades de cada producto debe enviar cada planta cada mes a cada una de las otras plantas.\nEl objetivo es encontrar el plan factible que maximice la ganancia total: ingreso por ventas totales menos la suma de los costos totales de producción, inventario y envío.\nDebido a los costos de inventario y a que las capacidades de almacenamiento son limitadas, es necesario mantener un registro de la cantidad de cada producto que se guarda en cada planta durante cada mes. \nVariables de decisión\nEl modelo PL tiene cuatro tipo de variables de decisión: cantidades de producción, cantidades de inventario, cantidades de venta y cantidades enviadas. Con $10$ plantas, $10$ máquinas, $10$ productos y $10$ meses da un total de $21, 000$ variables de decisión.\n\n\n$10,000$ variables de producción: una por cada combinación de planta, máquina, producto y mes.\n\n\n$1,000$ variables de inventario: una por cada combinación de planta, producto y mes.\n\n\n$1,000$ variables de ventas: una por cada combinación de planta, producto y mes.\n\n\n$9,000$ variables de envío: una por cada combinación de producto, mes, planta (planta origen) y otra planta (la planta destino). (Para las combinaciones de las plantas origen-destino, realícense combinaciones de $10$ en $2$ y multiplíquese por $2$ por la designación de planta origen-destino).\n\n\nFunción objetivo\nAl multiplicar cada variable de decisión por el costo unitario o ingreso unitario correspondiente y sumar según cada tipo, se tiene: \"Maximizar ganancia=ingresos totales por ventas - costo total\" donde: \"costo total = costo total de producción + costo total de inventario + costo total de envío\".\nRestricciones funcionales\nLas $21,000$ variables de decisión deben satisfacer las restricciones de no negatividad al igual que los cuatro tipos de restricciones funcionales: de capacidad de producción, de balanceo de plantas (restricciones de igualdad que proporcionan valores adecuados para las variables de inventario), de inventario máximo y de ventas máximas. 
En total se tienen $3,100$ restricciones funcionales.\n\n$1,000$ restricciones de capacidad de producción, una por cada combinación de planta, máquina y mes:\n\n\"Días de producción usados $\\leq$ días de producción disponibles,\ndonde: el lado izquierdo es la suma de $10$ fracciones, una por cada producto. Cada fracción es la cantidad de ese producto (una variable de decisión) dividida entre la tasa de producción del producto (una constante dada).\n\n$1,000$ restricciones de balance de plantas, una por cada combinación de planta, producto y mes:\n\n\"Cantidad producida + inventario del mes pasado + cantidad recibida = ventas + inventario actual + cantidad enviada\",\ndonde: la \"cantidad producida\" es la suma de las variables de decisión que representan las cantidades de producción de las máquinas, la \"cantidad recibida\" es la suma de las variables de decisión que representan las cantidades enviadas desde otras plantas y la \"cantidad enviada\" es la suma de las variables de decisión correspondientes a las cantidades que se mandan a las otras plantas.\n\n$100$ restricciones de inventario máximo, una por cada combinación de planta y mes:\n\n\"Inventario total $\\leq$ capacidad del inventario\",\ndonde: el lado izquierdo es la suma de las variables de decisión que representan las cantidades de inventario de los productos individuales.\n\n$1,000$ restricciones de ventas, una por cada combinación de planta, producto y mes:\n\n\"Ventas $\\leq$ demanda\"\n¿Cómo escribir el problema anterior y resolverlo en un lenguaje de programación?\nPodemos utilizar lenguajes de modelado, modeling languages, que permiten el uso de solvers como:\n\n\ncvxpy, cvx, cvxr, Convex, cvxopt\n\n\nor-tools\n\n\nJuMP\n\n\nAMPL, AMPL Python API, AMPL R API, AMPL MATLAB API\n\n\n```{admonition} Comentario\nEl problema anterior puede resolverse eficientemente con el método símplex. El método símplex resuelve en la práctica problemas con restricciones del orden de $10^5$ de manera eficiente. No tiene problemas manejando un número grande de variables, por ejemplo mayor a $10^5$ pero sí afecta su desempeño computacional aumentar el número de restricciones, por ejemplo con un número mayor o igual de $10^6$ restricciones.\n```\nUnas palabras sobre PL entera y métodos\nUno de los supuestos de la PL es la de divisibilidad que requiere que las variables de decisión puedan tomar valores no enteros. En el {ref}ejemplo de flujo en redes &lt;EJFLUJOENREDESYPL&gt; se vio que con cvxpy se puede resolver tal problema imponiendo la restricción que las variables de decisión sean enteras. Esto ocurre en muchos problemas prácticos por ejemplo si consideramos a personas, cajas que contengan materiales o vehículos. 
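A manera de ilustración, con cvxpy la restricción de integralidad se impone al declarar la variable de decisión; el siguiente es un esquema mínimo e hipotético con los datos del ejemplo prototipo (se requiere tener instalado un solver con soporte para variables enteras, por ejemplo GLPK_MI o SCIP):\n```python\nimport numpy as np\nimport cvxpy as cp\n\nA = np.array([[1, 0], [0, 2], [3, 2]])\nb = np.array([4, 12, 18])\nc = np.array([3, 5])\n\nx = cp.Variable(2, integer=True)  # variables de decisión enteras\nprob = cp.Problem(cp.Maximize(c @ x), [A @ x <= b, x >= 0])\nprob.solve()\nprint(x.value)  # se espera [2. 6.] para el ejemplo prototipo\n```\n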
Un PL al tener la restricción anterior se le nombra problema de programación entera (PE).\n```{admonition} Observación\n:class: tip\nSiendo rigurosos el nombre sería programación lineal entera pero es más común omitir \"lineal\" a menos que se encuentre en un contexto de programación no lineal entera.\n```\n```{admonition} Comentarios\n\n\nSi un PL admite variables de decisión que sean enteras y otras cumplan con el supuesto de divisibilidad entonces se nombra al problema de optimización programación entera mixta (PEM).\n\n\nSi las variables de decisión únicamente toman valores binarios (por ejemplo decisiones \"sí\", \"no\") se le nombra al problema de optimización programación entera binaria (PEB).\n\n\n```\nEjemplo prototipo de PEB\nSuponga que una compañía analiza la posibilidad de llevar a cabo una expansión mediante la construcción de fábricas ya sea en Monterrey o en Torreón. También piensa en construir, a lo sumo, un nuevo almacén, pero la decisión sobre el lugar en donde lo instalará está restringida a la ciudad donde se construyan las fábricas. Se presenta la siguiente tabla con datos del valor presente neto (VPN, rendimiento total que toma en cuenta el valor del dinero en el tiempo), el capital requerido y el disponible para llevar a cabo tal obra:\n|Número de decisión| Pregunta| Variable de decisión| VPN | Capital requerido|\n|:---:|:---:|:---:|:---:|:---:|\n|1|¿Construir la fábrica en Monterrey?| $x_1$| 9 millones| 6 millones|\n|2|¿Construir la fábrica en Torreón?|$x_2$|5 millones| 3 millones|\n|3|¿Construir el almacén en Monterrey?|$x_3$|6 millones|5 millones|\n|4|¿Construir el almacén en Torreón?|$x_4$|4 millones|2 millones|\n|-|-|-|Capital disponible:|10 millones|\nEn la tabla anterior se muestra el VPN de cada alternativa. En la última columna se proporciona el capital que se requiere (incluido el VPN) para las inversiones, donde el capital total disponible es de 10 millones de pesos. El objetivo es encontrar la combinación factible de alternativas que maximice el VPN.\nEl modelo PEB\nSean las variables de decisión:\n$$x_j = \\begin{cases} 1 & \\text{ si la decisión } j \\text{ es sí }\\\n0 & \\text{si la decisión } j \\text{ es no }\n\\end{cases}, \\quad j=1, 2, 3, 4\n$$\ny la función objetivo: $f_o(x) = 9x_1 + 5x_2 + 6x_3 + 4x_4$ que represente el VPN de estas decisiones.\nDados los datos de la tabla anterior una restricción del modelo es:\n$$6x_1 + 3x_2 + 5x_3 + 2x_4 \\leq 10$$\npor el capital disponible.\nLas siguientes restricciones tienen que ver con \n\n\nalternativas mutuamente excluyentes dado que la compañía quiere construir cuando mucho un almacén nuevo.\n\n\ndecisiones condicionales dado que la compañía consideraría la construcción de un almacén en determinada ciudad sólo si la nueva fábrica va a estar ahí.\n\n\nTales restricciones se modelan como:\n$$x_3 + x_4 \\leq 1$$\npara las alternativas mutuamente excluyentes y\n$$x_3 \\leq x_1$$\n$$x_4 \\leq x_2$$\npara las decisiones condicionales. 
\nEntonces el PEB es:\n$$\\displaystyle \\max_{x \\in \\mathbb{R}^4} 9x_1 + 5x_2 + 6x_3 + 4x_4$$\n$$\\text{sujeto a: }$$\n$$6x_1 + 3x_2 + 5x_3 + 2x_4 \\leq 10$$\n$$x_3 + x_4 \\leq 1$$\n$$-x_1 + x_3 \\leq 0$$\n$$-x_2 + x_4 \\leq 0$$\n$$x_j \\in {0,1}, \\quad j=1, 2, 3, 4$$\n```{admonition} Observaciones\n:class: tip\n\n\nLa última restricción es equivalente a $0 \\leq x_j \\leq 1$ y $x_j$ entera.\n\n\nSi en el problema se especifica que se quiere construir exactamente una fábrica en Monterrey o Torreón no importando la ciudad, entonces se añadiría la restricción $x_1 + x_2 = 1$.\n\n\n```\nConsideraciones sobre los modelos de PE\nPodría pensarse que los PE son más sencillos de resolver que los PL, lo cual no es correcto en general principalmente por algunas razones que se enlistan a continuación:\n\n\nUn número finito de soluciones factibles no asegura que un problema se pueda resolver. Por ejemplo, en el PEB si se tienen $n$ variables, existen $2^n$ soluciones que considerar aproximadamente (quizás algunas se puedan eliminar). Con $n=10$ se tienen mil soluciones y $n=30$ más de mil millones. Por lo que enlistar las soluciones factibles no es un buen método para resolver problemas de PEB (o PE).\n\n\nResolver un PL relajado que resulta del PE eliminando la restricción de variables enteras no siempre resuelve el PE original. Esto sólo ocurre en algunos casos especiales como es el problema del flujo de costo mínimo con parámetros enteros. Ver {ref}ejemplo de flujo en redes &lt;EJFLUJOENREDESYPL&gt; redondeando la solución de scipy que se obtuvo. La clave de la eficiencia del método símplex es la continuidad de las variables de decisión.\n\n\nLos tres factores determinantes de la dificultad computacional de un problema de PE son:\n\n\n1)El número de variables enteras\n2)Variables enteras ¿binarias o generales?\n3)Estructura especial del problema (si por ejemplo pueden eliminarse variables o restricciones dadas las características del problema).\nContrástese esto con la situación que la presencia de restricciones en un PL es más importante que el número de variables. En la PE el número de restricciones es secundario a los factores anteriores.\n```{admonition} Observación\n:class: tip\nExisten casos en los que aumentar el número de restricciones disminuye el número de soluciones factibles y por tanto el tiempo de cálculo.\n```\n```{admonition} Comentarios\n\n\nAunque resulta tentador resolver los PE como si fueran PL (trabajarlos como relajados) y redondear el resultado se corren riesgos:\n\n\nNo necesariamente una solución óptima del PL será factible después de redondearla.\n\n\nNo existe garantía de que la solución redondeada sea la solución óptima del PE.\n\n\n\n\nAún así hay algoritmos de PE que utilizan en sus pasos intermedios resolver PL relajados.\n```\nSobre algoritmos de PE\nLos ejemplos anteriores muestran la cantidad de variables y restricciones que pueden surgir en un problema real debido al número de combinaciones posibles que resultan. En particular el área de optimización que se relaciona con el número de combinaciones que resultan de enumerar el conjunto solución en el que las soluciones factibles son discretas es la área de optimización combinatoria.\nSi bien el método símplex y métodos de puntos interiores (ver {ref}introducción a los métodos de puntos interiores &lt;INTMETPIN&gt;) han probado ser métodos para abordar una amplia variedad de problemas prácticos de PL no siempre funcionan en problemas que surgen en la optimización combinatoria. 
Por ejemplo, en un problema en el que el número de restricciones funcionales sea mayor al del número de variables, el método símplex deberá realizar un esfuerzo computacional grande. Por lo que se han desarrollado algoritmos específicos para la PE. Dentro de los algoritmos más populares en los PE, PEM se encuentra el de ramificación y acotamiento (aplica la idea de divide y vencerás) y cortes de Gomory.\n```{admonition} Comentario\nPodemos utilizar el método símplex para resolver PE's si la matriz de la restricciones del problema cumple con la propiedad de total unimodularity.\n```\nSobre métodos heurísticos en optimización combinatoria\nLos métodos heurísticos y meta heurísticas (estrategias para mejorar o diseñar métodos heurísticos) encuentran una buena solución factible que al menos está razonablemente cerca de ser óptima, también pueden reportar que no se encontró tales soluciones. Sin embargo son métodos que no dan una garantía acerca de la calidad de la solución que se obtiene.\nLos métodos heurísticos se desarrollaron principalmente para el manejo de problemas large scale sean PL's o de optimización combinatoria. Típicamente se han ajustado a problemas específicos en lugar de aplicarse a una variedad de aplicaciones. \nComo se mencionó anteriormente no existe garantía de que la mejor solución que se encuentre con un método heurístico sea una solución óptima o incluso que esté cerca de serlo. Por tanto, siempre que sea posible resolver un problema mediante un algoritmo que pueda garantizar optimalidad, debe usarse éste en lugar de uno heurístico. El papel de los métodos heurísticos es abordar problemas que son muy grandes y complicados como para resolverlos por medio de algoritmos exactos.\nEjemplos: Vehicle routing problem,aka VRP, y Travelling Salesman Problem, aka TSP\nProblema del VRP: encontrar las rutas óptimas para vehículos de modo que satisfagan las demandas de clientes. Este problema generaliza al del TSP: dada una lista de ciudades y las distancias entre cada par de ellas, ¿cuál es la ruta más corta posible que visita cada ciudad exactamente una vez y al finalizar regresa a la ciudad origen?\n```{admonition} Observación\n:class: tip\nUn TSP con $10$ ciudades requiere un poco menos de $200,000$ soluciones factibles que deben ser consideradas, un problema con $20$ ciudades tiene alrededor de $10^{16}$ y uno con $50$ ciudades tiene alrededor de $10^{62}$.\n```\nEjemplos de algoritmos heurísticos o meta heurísticas\nUna lista de algoritmos que se clasifican como heurísticos o meta heurísticos utilizados en optimización en general (no sólo combinatoria) son:\n\n\nGreedy algorithm\n\n\nAlpha-beta pruning\n\n\nBest-first search\n\n\nSimulated annealing\n\n\nTabu search\n\n\nHill climbing\n\n\nGenetic algorithm\n\n\nParticle swarm optimization\n\n\nAnt colony optimization\n\n\nGuided local search\n\n\nChristofides algorithm\n\n\nNelder–Mead method\n\n\nEl paquete or-tools es un paquete de Python que tiene métodos como los anteriores y también para resolver PL's y de optimización de flujo en redes. Ver About OR-Tools, routing_options, routing.\nLa librería concorde escrita en C resuelve tipo de problemas TSP y de optimización de flujo en redes con cómputo en paralelo. 
Ver TSPLIB para instancias de problemas TSP.\n```{admonition} Ejercicios\n:class: tip\n1.Resuelve los ejercicios y preguntas de la nota.\n```\nPreguntas de comprehensión\n1)Escribe el modelo de programación lineal en forma estándar con nomenclatura matemática y asócialo con la forma estándar de un problema de optimización convexa. ¿Cómo se escribe el problema de optimización convexa en el caso de programación lineal?. \n2)Describe con palabras coloquiales lo que se desea realizar en un programa lineal y algunos de sus supuestos.\n3)Describe al problema de mezcla de productos.\n4)¿Qué es un poliedro y cómo se obtienen poliedros en un PL?\n5)¿Qué es una solución factible en un vértice? \n6)¿Qué es una solución básica factible? \n7)¿Qué es la basis y nonbasis matrix?\n8)¿Qué es una variable básica degenerada? e investiga qué mensaje se obtiene en un programa de computadora que tenga implementado el método símplex en un ejemplo en el que se tenga tales variables básicas.\n9)Si se conocen los valores numéricos de las variables básicas y no básicas que se obtienen en una solución no degenerada con el método símplex ¿cómo puede distinguirse si tal solución es una solución BF? \n10)¿Qué es una variable de holgura y para qué fueron utilizadas en la nota?\n11)Describe el proceso de pivoteo en el método símplex.\n12)Investiga qué es lo que puede concluirse a partir del método símplex si se elige una variable no básica que puede entrar al conjunto de variables básicas y en la prueba del cociente mínimo todos los denominadores son negativos.\n13)¿En qué casos podemos usar el método símplex para resolver programas lineales enteros?\nReferencias:\n\n\nF. Hillier, G. Lieberman, Introduction to Operations Research, Mc Graw Hill, 2014.\n\n\nR. K. Ahuja, T. L. Magnanti, J. B. Orlin, Network Flows, Theory, Algorithms and Applications, Prentice Hall, 1993.\n\n\nM. S. Bazaraa, J. J. Jarvis, H. D. Sherali, Linear Programming and Network Flows, Wiley, 2010.\n\n\nJ. Nocedal, S. J. Wright, Numerical Optimization, Springer, 2006." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jmschrei/pomegranate
tutorials/C_Feature_Tutorial_2_Out_Of_Core_Learning.ipynb
mit
[ "Out-of-Core Learning\nauthor: Jacob Schreiber <br>\ncontact: [email protected] <br>\nOut-of-core learning refers to the process of training a model on an amount of data that cannot fit in memory. There are several approaches that can be described as out-of-core, but here we refer to the ability to derive exact updates to a model from a massive data set, despite not being able to fit the entire thing in memory.\nThis out-of-core learning approach is implemented for all of pomegranate's models using two methods. The first is a summarize method that will take in a batch of data and reduce it down to additive sufficient statistics. Because these summaries are additive, after the first call, these summaries are added to the previously stored summaries. Once the entire data set has been seen, the stored sufficient statistics will be identical to those that would have been derived if the entire data set had been seen at once. The second method is the from_summaries method, which uses the stored sufficient statistics to derive parameter updates for the model.\nA common solution to having too much data is to randomly select an amount of data that does fit in memory to use in the place of the full data set. While simple to implement, this approach is likely to yield lower performance models because it is exposed to less data. However, by using out-of-core learning, on can train their models on a massive amount of data without being limited by the amount of memory their computer has.", "%matplotlib inline\nimport time\nimport pandas\nimport random\nimport numpy\nimport matplotlib.pyplot as plt\nimport seaborn; seaborn.set_style('whitegrid')\nimport itertools\n\nfrom pomegranate import *\n\nrandom.seed(0)\nnumpy.random.seed(0)\nnumpy.set_printoptions(suppress=True)\n\n%load_ext watermark\n%watermark -m -n -p numpy,scipy,pomegranate", "1. Training a Probability Distribution\nLet's start off simple with training a multivariate Gaussian distribution in an out-of-core manner. First, we'll generate some random data.", "X = numpy.random.normal([5, 7], [1.5, 0.4], size=(1000, 2))", "Then we can make a blank distribution with 2 dimensions. This is equivalent to filling in the mean and standard deviation with dummy values that will be overwritten, and don't effect the calculation.", "d1 = MultivariateGaussianDistribution.blank(2)\nd1", "Now let's summarize through a few batches of data.", "d1.summarize(X[:250])\nd1.summarize(X[250:500])\nd1.summarize(X[500:750])\nd1.summarize(X[750:])\n\nd1.summaries", "Now that we've seen the entire data set let's use the from_summaries method to update the parameters.", "d1.from_summaries()\nd1", "And what do we get if we learn directly from the data?", "MultivariateGaussianDistribution.from_samples(X)", "The exact same model.\n2. Training a Mixture Model\nThis summarization option enables a variety of different training strategies that can be written by hand. This notebook focuses on out-of-core learning, so let's make a data set and \"read it in\" one batch at a time to train a mixture model with a custom training function. We'll make another data set here, but one could easily have a function that read through some number of lines in a CSV, or loaded up a chunk from a numpy memory map, or whatever other massive data store you had.", "X = numpy.concatenate([numpy.random.normal(0, 1, size=(5000, 10)), numpy.random.normal(1, 1, size=(7500, 10))])\nn = X.shape[0]\n\nidx = numpy.arange(n)\nnumpy.random.shuffle(idx)\n\nX = X[idx]", "First we have to initialize our model. 
We can do that either by hand to some value we think is good, or by fitting to the first chunk of data, anticipating that it will be a decent representation of the remainder. We can also calculate the log probability of the data set now to see how much we improved.", "# First we initialize our model on some small chunk of data.\nmodel = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[:200], max_iterations=1, init='first-k')\n\n# The base performance on the data set.\nbase_logp = model.log_probability(X).sum()\n\nfrom tqdm import tqdm_notebook as tqdm\n\n# Now we write our own iterator. This outer loop will be the number of times we iterate---hard coded to 5 in this case.\nfor iteration in tqdm(range(5)):\n\n    # This internal loop goes over chunks from the data set. We're just loading chunks of a fixed size iteratively\n    # until we've seen the entire data set.\n    for i in range(10):\n        model.summarize(X[i * (n // 10):(i+1) * (n //10)])\n    \n    # Now we've seen the entire data set and summarized it. We can update the parameters now.\n    model.from_summaries() ", "How well did our model do on the data originally, and how well does it do now?", "base_logp, model.log_probability(X).sum()", "Looks like a decent improvement.\nNow, let's compare to having fit our model to the entire loaded data set for five epochs.", "model = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[:200], max_iterations=1, init='first-k')\nbase_logp = model.log_probability(X).sum()\n\nmodel.fit(X, max_iterations=5)\nbase_logp, model.log_probability(X).sum()", "Looks like the exact same values.\nYou may ask why we bothered to write a summarization function for data that did fit in memory. The purpose here was entirely illustrative. Our function that uses the summarize method would scale to any amount of data that could be loaded in batches, whereas the fit function can only scale to the amount of data that can fit in memory. However, they yield identical answers at the end, suggesting that if one wanted to scale to massive data sets but still get the same performance, this summarize function is the way to go." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ThyrixYang/LearningNotes
MOOC/stanford_cnn_cs231n/assignment2/FullyConnectedNets.ipynb
gpl-3.0
[ "Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... some intermediate value\n # Do some more computations ...\n out = # the output\ncache = (x, w, z, out) # Values we need to compute gradients\nreturn out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n# Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\nreturn dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in list(data.items()):\n print(('%s: ' % k, v.shape))", "Affine layer: foward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementaion by running the following:", "# Test the affine_forward function\n\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, 
num=weight_size).reshape(np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint('Testing affine_forward function:')\nprint('difference: ', rel_error(out, correct_out))", "Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.", "# Test the affine_backward function\nnp.random.seed(231)\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint('Testing affine_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:", "# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array([[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 5e-8\nprint('Testing relu_forward function:')\nprint('difference: ', rel_error(out, correct_out))", "ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:", "np.random.seed(231)\nx = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 3e-12\nprint('Testing relu_backward function:')\nprint('dx error: ', rel_error(dx_num, dx))", "\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. 
To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:", "from cs231n.layer_utils import affine_relu_forward, affine_relu_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint('Testing affine_relu_forward:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:", "np.random.seed(231)\nnum_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9\nprint('Testing svm_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))\n\ndx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8\nprint('\\nTesting softmax_loss:')\nprint('loss: ', loss)\nprint('dx error: ', rel_error(dx_num, dx))", "Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.", "np.random.seed(231)\nN, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-3\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)\n\nprint('Testing initialization ... ')\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint('Testing test-time forward pass ... 
')\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint('Testing training loss (no regularization)')\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print('Running numeric gradient check with reg = ', reg)\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))", "Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.", "model = TwoLayerNet(reg=1e-2, hidden_dim=200)\noptim_config = {\n 'learning_rate': 1e-3\n}\nsolver = Solver(model, data, \n num_train_samples=20000,\n num_epochs=15, \n batch_size=500, \n num_val_samples=1000,\n optim_config=optim_config,\n print_every=30000,\n lr_decay=0.95)\nsolver.train()\n##############################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #\n# 50% accuracy on the validation set. #\n##############################################################################\npass\n##############################################################################\n# END OF YOUR CODE #\n##############################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()", "Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. 
For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.", "np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))", "As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.", "# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 1e-1\nlearning_rate = 1e-3\nmodel = FullyConnectedNet([100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=1000, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()", "Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.", "# TODO: Use a five-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-3\nweight_scale = 1e-1\nmodel = FullyConnectedNet([100, 100, 100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10000, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()", "Inline question:\nDid you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?\nAnswer:\n5 layer net is far more sensitive....\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. 
We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.", "from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint('next_w error: ', rel_error(next_w, expected_next_w))\nprint('velocity error: ', rel_error(expected_velocity, config['velocity']))", "Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.", "num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': 1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton. 
\"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.", "# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('cache error: ', rel_error(expected_cache, config['cache']))\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint('next_w error: ', rel_error(expected_next_w, next_w))\nprint('v error: ', rel_error(expected_v, config['v']))\nprint('m error: ', rel_error(expected_m, config['m']))", "Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:", "learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print('running with ', update_rule)\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print()\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 
3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in list(solvers.items()):\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.", "best_model = None\n################################################################################\n# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #\n# find batch normalization and dropout useful. Store your best model in the #\n# best_model variable. #\n################################################################################\nupdate_rule = 'adam'\nmodel = FullyConnectedNet([200, 200, 100, 100, 100], weight_scale=5e-2)\nsolver = Solver(model, data,\n num_epochs=10, batch_size=300,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\nsolver.train()\nbest_model = solver\n################################################################################\n# END OF YOUR CODE #\n################################################################################", "Test your model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.", "y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)\ny_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)\nprint('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())\nprint('Test set accuracy: ', (y_test_pred == data['y_test']).mean())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CINPLA/exdir
tests/benchmarks/benchmarks.ipynb
mit
[ "Benchmarks for Exdir\nThis notebook contains a number of benchmarks for Exdir.\nThey compare the performance of Exdir with h5py.\nWarning: Please make sure the files are not created in a folder managed by Syncthing, Dropbox or any other file synchronization system. \nWe will be making a large number of changes to the files and a file synchronization system will reduce performance and possibly become out of sync in the process.\nNote: You may experience unreliable results on some systems, where the numbers vary greatly between each run. \nThis can be caused by the large number of I/O operations performed by the benchmarks. \nWe have tried to improve the reliability by adding a call to time.sleep between setting up the benchmark and running the benchmark.\nThis should allow the system to completely flush to disk the changes made while setting up and have the benchmark run unaffected.\nHowever, if you still experience unreliable results, you may want to try to set up a RAM disk and change the below paths to read /tmp/ramdis/test.exdir and /tmp/ramdisk/test.h5:\nmkdir /tmp/ramdisk/\nsudo mount -t tmpfs -o size=2048M tmpfs /tmp/ramdisk/\n\nHelper functions\nThe following functions are used to set up an exdir or hdf5 file for benchmarking:", "import exdir\nimport os\nimport shutil\nimport h5py\n\ndef setup_exdir():\n testpath = \"test.exdir\"\n if os.path.exists(testpath):\n shutil.rmtree(testpath)\n f = exdir.File(testpath)\n return f, testpath\n\ndef setup_exdir_no_validation():\n testpath = \"test.exdir\"\n if os.path.exists(testpath):\n shutil.rmtree(testpath)\n f = exdir.File(testpath, name_validation=exdir.validation.minimal)\n return f, testpath\n\ndef teardown_exdir(f, testpath):\n f.close()\n shutil.rmtree(testpath)\n\ndef setup_h5py():\n testpath = \"test.h5\"\n if os.path.exists(testpath):\n os.remove(testpath)\n f = h5py.File(testpath)\n return f, testpath\n\n \ndef teardown_h5py(f, testpath):\n f.close()\n os.remove(testpath)", "The following function is used to run the different benchmarks.\nIt takes a target function to test, a setup function to create the file and the number of iterations the function should be run to get a decent average:", "import time\n\ndef benchmark(target, setup=None, teardown=None, iterations=10):\n total_time = 0\n setup_teardown_start = time.time()\n for i in range(iterations):\n data = tuple()\n if setup is not None:\n data = setup()\n time.sleep(1) # allow changes to be flushed to disk\n start_time = time.time()\n target(*data)\n end_time = time.time()\n total_time += end_time - start_time\n if teardown is not None:\n teardown(*data)\n setup_teardown_end = time.time()\n total_setup_teardown = setup_teardown_end - setup_teardown_start\n \n mean = total_time / iterations\n \n return mean", "The following functions are used as wrappers to make it easy to run a benchmark of Exdir or h5py:", "import pandas as pd\nimport numpy as np\n\nall_results = []\n\ndef benchmark_both(function, iterations=10, name_validation=True):\n if name_validation:\n setup_exdir_ = setup_exdir\n name = function.__name__\n else:\n setup_exdir_ = setup_exdir_no_validation\n name = function.__name__ + \" (minimal name validation)\"\n \n exdir_mean = benchmark(\n target=lambda f, path: function(f),\n setup=setup_exdir_,\n teardown=teardown_exdir,\n iterations=iterations\n )\n hdf5_mean = benchmark(\n target=lambda f, path: function(f),\n setup=setup_h5py,\n teardown=teardown_h5py,\n iterations=iterations\n )\n \n result = pd.DataFrame(\n [(name, hdf5_mean, exdir_mean, 
hdf5_mean/exdir_mean)],\n columns=[\"Test\", \"h5py\", \"Exdir\", \"Ratio\"]\n )\n all_results.append(result)\n return result\n\ndef benchmark_exdir(function, iterations=10):\n exdir_mean = benchmark(\n target=lambda f, path: function(f),\n setup=setup_exdir,\n teardown=teardown_exdir,\n iterations=iterations\n )\n result = pd.DataFrame(\n [(function.__name__, np.nan, exdir_mean, np.nan)],\n columns=[\"Test\", \"h5py\", \"Exdir\", \"Ratio\"]\n )\n all_results.append(result)\n return result", "We are now ready to start running the different benchmarks.\nBenchmark functions\nThe following benchmark creates a small number of attributes.\nThis should be very fast with both h5py and Exdir:", "def add_few_attributes(obj):\n for i in range(5):\n obj.attrs[\"hello\" + str(i)] = \"world\"\n\nbenchmark_both(add_few_attributes)", "The following benchmark adds a larger number of attributes one-by-one.\nBecause Exdir needs to read back and rewrite the entire file in case someone changed it between each write, this is significantly slower with Exdir than h5py:", "def add_many_attributes(obj):\n for i in range(200):\n obj.attrs[\"hello\" + str(i)] = \"world\"\n\nbenchmark_both(add_many_attributes, 10)", "However, Exdir is capable of writing all attributes in one operation.\nThis makes writing the same attributes about as fast (or even faster than h5py).\nWriting a large number of attributes in a single operation is not possible with h5py.\nWe therefore need to run this only with Exdir:", "def add_many_attributes_single_operation(obj):\n attributes = {}\n for i in range(200):\n attributes[\"hello\" + str(i)] = \"world\"\n obj.attrs = attributes\n \nbenchmark_exdir(add_many_attributes_single_operation)", "Exdir also supports adding nested attributes, such as Python dictionaries, which is not supported by h5py:", "def add_attribute_tree(obj):\n tree = {}\n for i in range(100):\n tree[\"hello\" + str(i)] = \"world\"\n tree[\"intermediate\"] = {}\n intermediate = tree[\"intermediate\"]\n for level in range(10):\n level_str = \"level\" + str(level)\n intermediate[level_str] = {}\n intermediate = intermediate[level_str]\n intermediate = 42\n obj.attrs[\"test\"] = tree\n \nbenchmark_exdir(add_attribute_tree)", "The following benchmarks create a small, a medium, and a large dataset:", "def add_small_dataset(obj):\n data = np.zeros((100, 100, 100))\n obj.create_dataset(\"foo\", data=data)\n obj.close()\n \nbenchmark_both(add_small_dataset)\n\ndef add_medium_dataset(obj):\n data = np.zeros((1000, 100, 100))\n obj.create_dataset(\"foo\", data=data)\n obj.close()\n \nbenchmark_both(add_medium_dataset, 10)\n\ndef add_large_dataset(obj):\n data = np.zeros((1000, 1000, 100))\n obj.create_dataset(\"foo\", data=data)\n obj.close()\n \nbenchmark_both(add_large_dataset, 3)", "There is some overhead in creating the objects themselves.\nThis is rather small in h5py, but can be high in Exdir with name validation enabled.\nThis is because the name of every created object must be checked against all the existing objects in the same group:", "def create_many_objects(obj):\n for i in range(5000):\n group = obj.create_group(\"group{}\".format(i))\n\nbenchmark_both(create_many_objects, 3)", "Without minimal validation, this is almost as fast in Exdir as it is in h5py.\nMinimal name validation only checks if file with the exact same name exist in the folder:", "benchmark_both(create_many_objects, 3, name_validation=False)", "Not only the number of created objects matter.\nCreating them in a tree structure can also incur a performance 
penalty.\nThe following test creates an object tree:", "def create_large_tree(obj, level=0):\n if level > 4:\n return\n for i in range(3):\n group = obj.create_group(\"group_{}_{}\".format(i, level))\n data = np.zeros((10, 10, 10))\n group.create_dataset(\"dataset_{}_{}\".format(i, level), data=data)\n create_large_tree(group, level + 1)\n \nbenchmark_both(create_large_tree)", "The final benchmark tests writing a \"slice\" of a dataset, which means only a part of the entire dataset is modified.\nThis is typically fast in both h5py and in Exdir thanks to memory mapping.", "def write_slice(dataset):\n dataset[320:420, 0:300, 0:100] = np.ones((100, 300, 100))\n\ndef create_setup_dataset(setup_function):\n def setup():\n f, path = setup_function()\n data = np.zeros((1000, 500, 100))\n dataset = f.create_dataset(\"foo\", data=data)\n time.sleep(1) # allow changes to get flushed to disk\n return dataset, f, path\n return setup\n\nexdir_mean = benchmark(\n target=lambda dataset, f, path: write_slice(dataset),\n setup=create_setup_dataset(setup_exdir),\n teardown=lambda dataset, f, path: teardown_exdir(f, path),\n iterations=3\n)\n\nhdf5_mean = benchmark(\n target=lambda dataset, f, path: write_slice(dataset),\n setup=create_setup_dataset(setup_h5py),\n teardown=lambda dataset, f, path: teardown_h5py(f, path),\n iterations=3\n)\nresult = pd.DataFrame(\n [(\"write_slice\", hdf5_mean, exdir_mean, hdf5_mean/exdir_mean)],\n columns=[\"Test\", \"h5py\", \"Exdir\", \"Ratio\"]\n)\nall_results.append(result)\n\nresult", "Benchmark summary\nThe results are summarized in the following table:", "pd.concat(all_results)", "Profiling the largest differences\nWhile the performance of Exdir in many cases is close to h5py, there are a few cases that can be worth investigating further.\nFor instance, it might be interesting to know what takes most time in create_large_tree, which is about 2-3 times slower in Exdir than h5py:", "import cProfile\n\nf, path = setup_exdir()\ncProfile.run('create_large_tree(f)', sort=\"cumtime\")\nteardown_exdir(f, path)", "Here we see that create_dataset and create_group take up about 2/3 and 1/3 of the total run time, respectively.\nSome of the time in both of these are spent on building paths using pathlib and name validation.\nThe remaining time is mostly spent on writing the array header of the NumPy files.\nOnly a small amount of time is spent on actually writing files.\nIncreasing performance in this case will likely mean that we need to outperform pathlib in building paths and numpy in writing files.\nWhile it might be possible, it is also beneficial to stick with the existing, well-tested implementations of both of these libraries." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Santana9937/Classification_ML_Specialization
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
mit
[ "Predicting sentiment from product reviews\nIn this notebook, you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.\n\nUse DataFrames to do some feature engineering\nTrain a logistic regression model to predict the sentiment of product reviews.\nInspect the weights (coefficients) of a trained logistic regression model.\nMake a prediction (both class and probability) of sentiment for a new product review.\nGiven the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.\nInspect the coefficients of the logistic regression model and interpret their meanings.\nCompare multiple logistic regression models.\n\nImporting Libraries", "import os\nimport zipfile\nimport string\nimport numpy as np\nimport pandas as pd\nfrom sklearn import linear_model\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('darkgrid')\n%matplotlib inline", "Unzipping files with Amazon Baby Products Reviews\nThe dataset consists of baby product reviews from Amazon.com.", "# Put files in current direction into a list\nfiles_list = [f for f in os.listdir('.') if os.path.isfile(f)]\n\n# Filename of unzipped file\nunzipped_file = 'amazon_baby.csv'\n\n# If upzipped file not in files_list, unzip the file\nif unzipped_file not in files_list:\n zip_file = unzipped_file + '.zip'\n unzipping = zipfile.ZipFile(zip_file)\n unzipping.extractall()\n unzipping.close", "Loading the products data\nThe dataset is loaded into a Pandas DataFrame called products.", "products = pd.read_csv(\"amazon_baby.csv\")", "Now, let us see a preview of what the dataset looks like.", "products.head()", "Performing text cleaning\nLet us explore a specific example of a baby product.", "products.ix[1]", "Now, we will perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string functionality.\nTransform the reviews into word-counts.\n\nAside. In this notebook, we remove all punctuations for the sake of simplicity. A smarter approach to punctuations would preserve phrases such as \"I'd\", \"would've\", \"hadn't\" and so forth. See this page for an example of smart handling of punctuations.\nBefore removing the punctuation from the strings in the review column, we will fall all NA values with empty string.", "products[\"review\"] = products[\"review\"].fillna(\"\")", "Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.", "products[\"review_clean\"] = products[\"review\"].str.translate(None, string.punctuation) ", "Extract sentiments\nWe will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.", "products = products[products['rating'] != 3]\nlen(products)", "Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. 
For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.\nBelow, we create a function that we will apply to the \"rating\" column of the dataframe to determine if the review is positive or negative.", "def sent_func(x):\n # If rating is >=4, return a positive sentiment (+1)\n if x>=4:\n return 1\n # Else, return a negative sentiment (-1)\n else:\n return -1", "Creating a \"sentiment\" column by applying the sent_func to the \"rating\" column in the dataframe.", "products['sentiment'] = products['rating'].apply(sent_func)\n\nproducts.ix[20:22]", "Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).\nSplit data into training and test sets\nLet's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set.\nLoading the indices for the train and test data and putting them in a list", "with open('module-2-assignment-train-idx.txt', 'r') as train_file:\n ind_list_train = map(int,train_file.read().split(',')) \n\nwith open('module-2-assignment-test-idx.txt', 'r') as test_file:\n ind_list_test = map(int,test_file.read().split(','))", "Using the indices of the train and test data to create the train and test datasets.", "train_data = products.iloc[ind_list_train,:]\ntest_data = products.iloc[ind_list_test,:]\n\nprint len(train_data)\nprint len(test_data)", "Build the word count vector for each review\nWe will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows:\n\nLearn a vocabulary (set of all words) from the training data. Only the words that show up in the training data will be considered for feature extraction.\nCompute the occurrences of the words in each review and collect them into a row vector.\nBuild a sparse matrix where each row is the word count vector for the corresponding review. Call this matrix train_matrix.\nUsing the same mapping between words and columns, convert the test data into a sparse matrix test_matrix.\n\nThe following cell uses CountVectorizer in scikit-learn. Notice the token_pattern argument in the constructor.", "# Use this token pattern to keep single-letter words\nvectorizer = CountVectorizer(token_pattern=r'\\b\\w+\\b')\n# First, learn vocabulary from the training data and assign columns to words\n# Then convert the training data into a sparse matrix\ntrain_matrix = vectorizer.fit_transform(train_data['review_clean'])\n# Second, convert the test data into a sparse matrix, using the same word-column mapping\ntest_matrix = vectorizer.transform(test_data['review_clean'])", "Train a sentiment classifier with logistic regression\nWe will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target.\nNote: This line may take 1-2 minutes.\nCreating an instance of the LogisticRegression class", "logreg = linear_model.LogisticRegression()", "Using the fit method to train the classifier. 
This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model sentiment_model.", "sentiment_model = logreg.fit(train_matrix, train_data[\"sentiment\"])", "Putting all the weights from the model into a numpy array.", "weights_list = list(sentiment_model.intercept_) + list(sentiment_model.coef_.flatten())\nweights_sent_model = np.array(weights_list, dtype = np.double)\nprint len(weights_sent_model)", "There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. \nQuiz question: How many weights are >= 0?", "num_positive_weights = len(weights_sent_model[weights_sent_model >= 0.0])\nnum_negative_weights = len(weights_sent_model[weights_sent_model < 0.0])\n\nprint \"Number of positive weights: %i\" % num_positive_weights\nprint \"Number of negative weights: %i\" % num_negative_weights", "Making predictions with logistic regression\nNow that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.", "sample_test_data = test_data.ix[[59,71,91]]\nprint sample_test_data['rating']\nsample_test_data", "Let's dig deeper into the first row of the sample_test_data. Here's the full review:", "sample_test_data['review'].ix[59]", "That review seems pretty positive.\nNow, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.", "sample_test_data['review'].ix[71]", "We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:\n$$\n\\mbox{score}_i = \\mathbf{w}^T h(\\mathbf{x}_i)\n$$ \nwhere $h(\\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores . 
For each row, the score (or margin) is a number in the range [-inf, inf].", "sample_test_matrix = vectorizer.transform(sample_test_data['review_clean'])\nscores = sentiment_model.decision_function(sample_test_matrix)\nprint scores", "Predicting sentiment\nThese scores can be used to make class predictions as follows:\n$$\n\\hat{y} = \n\\left{\n\\begin{array}{ll}\n +1 & \\mathbf{w}^T h(\\mathbf{x}_i) > 0 \\\n -1 & \\mathbf{w}^T h(\\mathbf{x}_i) \\leq 0 \\\n\\end{array} \n\\right.\n$$\nUsing scores, write code to calculate $\\hat{y}$, the class predictions:", "pred_sent_test_data = []\nfor val in scores:\n if val>0:\n pred_sent_test_data.append(1)\n else:\n pred_sent_test_data.append(-1)\nprint pred_sent_test_data ", "Checkpoint: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from Scikit-Learn.", "print \"Class predictions according to Scikit-Learn:\" \nprint sentiment_model.predict(sample_test_matrix)", "Probability predictions\nRecall from the lectures that we can also calculate the probability predictions from the scores using:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))}.\n$$\nUsing the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].", "prob_pos_score = 1.0/(1.0 + np.exp(-scores))\nprob_pos_score", "Checkpoint: Make sure your probability predictions match the ones obtained from Scikit-Learn.", "print \"Class predictions according to Scikit-Learn:\" \nprint sentiment_model.predict_proba(sample_test_matrix)[:,1]", "Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?\n The 3rd data point has the lowest probability of being positive \nFind the most positive (and negative) review\nWe now turn to examining the full test dataset, test_data.\nUsing the sentiment_model, find the 40 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the \"most positive reviews.\"\nTo calculate these top-40 reviews, use the following steps:\n1. Make probability predictions on test_data using the sentiment_model.\n2. Sort the data according to those predictions and pick the top 40. \nComputing the scores with the sentiment_model decision function and then calculating the probability that y = +1", "scores_test_data = sentiment_model.decision_function(test_matrix)\nprob_test_data = 1.0/(1.0 + np.exp(-scores_test_data))", "To find the 40 most positive and the 40 most negative values, we will create a list of tuples with the entries (probability, index). 
We will then sort the list and will be able to extract the indicies corresponding to each entry.", "# List of indicies in the test data\nind_vals_test_data = test_data.index.values\n# Empty list that will be filled with the tuples (probability, index)\nscore_label_lst_test = len(scores_test_data)*[-1]", "Filling the list of tuples with the (probability, index) values", "for i in range(len(scores_test_data)):\n score_label_lst_test[i] = (prob_test_data[i],ind_vals_test_data[i])", "Sorting the list with the entries (probability, index)", "score_label_lst_test.sort()", "Extracting the top 40 positive reviews and the top 40 negative reviews", "top_40_pos_test_rev = score_label_lst_test[-40:]\ntop_40_neg_test_rev = score_label_lst_test[0:40]", "Getting the indicies of the top 40 positive reviews.", "ind_top_40_pos_test = 40*[-1]\nfor i,val in enumerate(top_40_pos_test_rev):\n ind_top_40_pos_test[i] = val[1]", "Getting the indicies of the top 40 negative reviews.", "ind_top_40_neg_test = 40*[-1]\nfor i,val in enumerate(top_40_neg_test_rev):\n ind_top_40_neg_test[i] = val[1]", "Quiz Question: Which of the following products are represented in the 40 most positive reviews? [multiple choice]", "test_data.ix[ind_top_40_pos_test][\"name\"]", "Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]", "test_data.ix[ind_top_40_neg_test][\"name\"]", "Compute accuracy of the classifier\nWe will now evaluate the accuracy of the trained classifer. Recall that the accuracy is given by\n$$\n\\mbox{accuracy} = \\frac{\\mbox{# correctly classified examples}}{\\mbox{# total examples}}\n$$\nThis can be computed as follows:\n\nStep 1: Use the trained model to compute class predictions\nStep 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).\nStep 3: Divide the total number of correct predictions by the total number of data points in the dataset.\n\nComplete the function below to compute the classification accuracy:", "def get_classification_accuracy(model, data, true_labels):\n \n # Constructing the wordcount vector\n data_matrix = vectorizer.transform(data['review_clean'])\n \n # Getting the predictions\n preds_data = model.predict(data_matrix)\n \n # Computing the number of correctly classified examples and the total examples\n n_correct = float(np.sum(preds_data == true_labels.values))\n n_total = float(len(preds_data))\n\n # Computing the accuracy by dividing number of \n #correctly classified examples by total number of examples\n accuracy = n_correct/n_total\n \n return accuracy", "Now, let's compute the classification accuracy of the sentiment_model on the test_data.", "acc_sent_mod_test = get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])\nprint acc_sent_mod_test", "Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).", "print \"Accuracy on Test Data: %.2f\" %(acc_sent_mod_test)", "Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?\n No, you may be overfitting. 
\nNow, computing the accuracy of the sentiment model on the training data for a future quiz question.", "acc_sent_mod_train = get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])\nprint acc_sent_mod_train", "Finding the weights of significant words for the sentiment_model.\nIn this section, we will find the weights of significant words for the sentiment_model.\nCreating a vocab list. The vocab list constains all the words used for the sentiment_model", "vocab = vectorizer.get_feature_names()\nprint len(vocab)", "Creating a list of the significant words in the utf-8 format", "un_sig_words = [u'love', u'great', u'easy', u'old', u'little', u'perfect', u'loves', \n u'well', u'able', u'car', u'broke', u'less', u'even', u'waste', u'disappointed', \n u'work', u'product', u'money', u'would', u'return']", "Creating a list that will store all the indicies where the significant words appear in the vocab list.", "ind_vocab_sig_words = []", "Finding the index where each significant word appears.", "for word in un_sig_words:\n ind_vocab_sig_words.append(vocab.index(word))", "Creating an empty list that will store the weights of the significant words. Then, using the index to find the weight for each signigicant word.", "ws_sent_mod_sig_words = []\nfor ind in ind_vocab_sig_words:\n ws_sent_mod_sig_words.append(sentiment_model.coef_.flatten()[ind])", "Creating a series that will store the weights of the significant words and displaying this Series.", "ws_sent_mod_ser = pd.Series(data=ws_sent_mod_sig_words, index=un_sig_words)\nws_sent_mod_ser", "Learn another classifier with fewer words\nThere were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subet of words that occur in the reviews. For this assignment, we selected a 20 words to work with. These are:", "significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves', \n 'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed', \n 'work', 'product', 'money', 'would', 'return']\n\nlen(significant_words)", "Compute a new set of word count vectors using only these words. The CountVectorizer class has a parameter that lets you limit the choice of words when building word count vectors:", "vectorizer_word_subset = CountVectorizer(vocabulary=significant_words) # limit to 20 words\ntrain_matrix_word_subset = vectorizer_word_subset.fit_transform(train_data['review_clean'])\ntest_matrix_word_subset = vectorizer_word_subset.transform(test_data['review_clean'])", "Train a logistic regression model on a subset of data\nWe will now build a classifier with word_count_subset as the feature and sentiment as the target. \nCreating an instance of the LogisticRegression class. Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model simple_model.", "log_reg = linear_model.LogisticRegression()\nsimple_model = logreg.fit(train_matrix_word_subset, train_data[\"sentiment\"])", "Getting the weights for the 20 significant words from the simple_model", "ws_simp_model = list(simple_model.coef_.flatten())", "Putting the weights in a Series with the words corresponding to the weights as the index.", "ws_simp_mod_ser = pd.Series(data=ws_simp_model, index=significant_words)\nws_simp_mod_ser", "Quiz Question: Consider the coefficients of simple_model. 
How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?", "print len(simple_model.coef_[simple_model.coef_>0])", "Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?\n Yes, see weights below for the significant words for the sentiment model", "ws_sent_mod_ser", "Comparing models\nWe will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.\nFirst, compute the classification accuracy of the sentiment_model on the train_data:", "acc_sent_mod_train", "Now, compute the classification accuracy of the simple_model on the train_data:", "preds_simp_mod_train = simple_model.predict(train_matrix_word_subset)\nn_cor_preds_simp_mod_train = float(np.sum(preds_simp_mod_train == train_data['sentiment'].values))\nn_tol_preds_simp_mod_train = float(len(preds_simp_mod_train))\nacc_simp_mod_train = n_cor_preds_simp_mod_train/n_tol_preds_simp_mod_train\nprint acc_simp_mod_train", "Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?", "if acc_sent_mod_train>acc_simp_mod_train:\n print \"sentiment_model\"\nelse:\n print \"simple_model\"", "Now, we will repeat this excercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:", "acc_sent_mod_test", "Next, we will compute the classification accuracy of the simple_model on the test_data:", "preds_simp_mod_test = simple_model.predict(test_matrix_word_subset)\nn_cor_preds_simp_mod_test = float(np.sum(preds_simp_mod_test == test_data['sentiment'].values))\nn_tol_preds_simp_mod_test = float(len(preds_simp_mod_test))\nacc_simp_mod_test = n_cor_preds_simp_mod_test/n_tol_preds_simp_mod_test\nprint acc_simp_mod_test", "Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?", "if acc_sent_mod_test>acc_simp_mod_test:\n print \"sentiment_model\"\nelse:\n print \"simple_model\"", "Baseline: Majority class prediction\nIt is quite common to use the majority class classifier as the a baseline (or reference) model for comparison with your classifier model. The majority classifier model predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier, otherwise, the model is (usually) pointless.\nWhat is the majority class in the train_data?", "num_positive = (train_data['sentiment'] == +1).sum()\nnum_negative = (train_data['sentiment'] == -1).sum()\nacc_pos_train = float(num_positive)/float(len(train_data['sentiment']))\nacc_neg_train = float(num_negative)/float(len(train_data['sentiment']))\nif acc_pos_train>acc_neg_train:\n print \"Positive Sentiment is Majority Classifier for Training Data\"\nelse:\n print \"Negative Sentiment is Majority Classifier for Training Data\"", "Now compute the accuracy of the majority class classifier on test_data.\nQuiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 
0.76).", "num_pos_test = (test_data['sentiment'] == +1).sum()\nacc_pos_test = float(num_pos_test)/float(len(test_data['sentiment']))\nprint \"Accuracy of Majority Class Classifier on Test Data: %.2f\" %(acc_pos_test)", "Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?", "if acc_sent_mod_test>acc_pos_test:\n print \"Yes, the sentiment_model is better than majority class classifier\"\nelse:\n print \"No, the majority class classifier is better than sentiment_model\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oroszgy/oroszgy.github.io
content/handouts/concurrency-exercise.ipynb
mit
[ "Benchmarking your code", "def fun():\n max(range(1000))", "Using magic functions of Jupyter and timeit\n\nhttps://docs.python.org/3.5/library/timeit.html\nhttps://ipython.org/ipython-doc/3/interactive/magics.html#magic-time", "%%timeit\nfun()\n\n%%time\nfun()", "Exercises\n\nWhat is the fastest way to download 100 pages from index.hu?\nHow to calculate the factors of 1000 random integers effectively using factorize_naive function below?", "import requests\ndef get_page(url): \n response = requests.request(url=url, method=\"GET\")\n return response\nget_page(\"http://index.hu\")\n\ndef factorize_naive(n):\n \"\"\" A naive factorization method. Take integer 'n', return list of\n factors.\n \"\"\"\n if n < 2:\n return []\n factors = []\n p = 2\n\n while True:\n if n == 1:\n return factors\n\n r = n % p\n if r == 0:\n factors.append(p)\n n = n // p\n elif p * p >= n:\n factors.append(n)\n return factors\n elif p > 2:\n # Advance in steps of 2 over odd numbers\n p += 2\n else:\n # If p == 2, get to 3\n p += 1\n assert False, \"unreachable\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jimregan/tesseract-gle-uncial
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
apache-2.0
[ "<a href=\"https://colab.research.google.com/github/jimregan/tesseract-gle-uncial/blob/master/Update_gle_uncial_traineddata_for_Tesseract_4.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nGrab this for later", "!wget https://github.com/jimregan/tesseract-gle-uncial/releases/download/v0.1beta2/gle_uncial.traineddata", "Install dependencies", "!apt-get install libicu-dev libpango1.0-dev libcairo2-dev libleptonica-dev\n", "Clone, compile and set up Tesseract", "!git clone https://github.com/tesseract-ocr/tesseract\n\nimport os\nos.chdir('tesseract')\n\n!sh autogen.sh\n\n!./configure --disable-graphics\n\n\n!make -j 8\n!make install\n!ldconfig\n!make training\n!make training-install", "Grab some things to scrape the RIA corpus", "import os\nos.chdir('/content')\n!git clone https://github.com/jimregan/tesseract-gle-uncial/\n\n!apt-get install lynx", "Scrape the RIA corpus", "! for i in A B C D E F G H I J K L M N O P Q R S T U V W X Y Z;do lynx -dump \"http://corpas.ria.ie/index.php?fsg_function=1&fsg_page=$i\" |grep http://corpas.ria.ie|awk '{print $NF}' >> list;done\n\n!grep 'function=3' list |sort|uniq|grep corpas.ria|sed -e 's/function=3/function=5/' > input\n\n!wget -x -c -i input\n\n!mkdir text\n!for i in corpas.ria.ie/*;do id=$(echo $i|awk -F'=' '{print $NF}');cat $i | perl /content/tesseract-gle-uncial/scripts/extract-ria.pl > text/$id.txt;done", "Get the raw corpus in a single text file", "!cat text/*.txt|grep -v '^$' > ria-raw.txt\n", "Compress the raw text; this can be downloaded through the file browser on the left, so the scraping steps can be skipped in future", "!gzip ria-raw.txt", "...and can be re-added using the upload feature in the file browser", "!gzip -d ria-raw.txt.gz", "This next part is so I can update the langdata files", "import os\nos.chdir('/content')\n!git clone https://github.com/tesseract-ocr/langdata\n\n!cat ria-raw.txt | perl /content/tesseract-gle-uncial/scripts/toponc.pl > ria-ponc.txt\n\n!mkdir genwlout\n\n!perl /content/tesseract-gle-uncial/scripts/genlangdata.pl -i ria-ponc.txt -d genwlout -p gle_uncial\n\nimport os\nos.chdir('/content/genwlout')\n#!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.unsorted | awk -F'\\t' '{print $1}' | sort | uniq > $i.sorted;done\n!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.sorted /content/langdata/gle_uncial/$i | sort | uniq > $i;done\n\n!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp $i /content/langdata/gle_uncial/;done\n\nGrab the fonts\n\nimport os\nos.chdir('/content')\n!mkdir fonts\nos.chdir('fonts')\n!wget -i /content/tesseract-gle-uncial/fonts.txt\n\n!for i in *.zip; do unzip $i;done", "Generate", "os.chdir('/content')\n!mkdir unpack\n!combine_tessdata -u /content/gle_uncial.traineddata unpack/gle_uncial.\n\nos.chdir('unpack')\n!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp /content/genwlout/$i .;done\n\n!wordlist2dawg gle_uncial.numbers gle_uncial.lstm-number-dawg gle_uncial.lstm-unicharset\n!wordlist2dawg gle_uncial.punc gle_uncial.lstm-punc-dawg gle_uncial.lstm-unicharset\n!wordlist2dawg gle_uncial.wordlist gle_uncial.lstm-word-dawg gle_uncial.lstm-unicharset\n\n!rm gle_uncial.numbers gle_uncial.word.bigrams gle_uncial.punc gle_uncial.wordlist\n\nos.chdir('/content')\n!mv gle_uncial.traineddata 
gle_uncial.traineddata.orig\n!combine_tessdata unpack/gle_uncial.\n\nos.chdir('/content')\n!bash /content/tesseract/src/training/tesstrain.sh\n\n!text2image --fonts_dir fonts --list_available_fonts\n\n!cat genwlout/gle_uncial.wordlist.unsorted|awk -F'\\t' '{print $2 \"\\t\" $1}'|sort -nr > freqlist\n\n!cat freqlist|awk -F'\\t' '{print $2}'|grep -v '^$' > wordlist\n\n!cat ria-ponc.txt|sort|uniq|head -n 400000 > gle_uncial.training_text\n\n!cp unpack/gle_uncial.traineddata /usr/share/tesseract-ocr/4.00/tessdata\n\n!cp gle_uncial.training_text langdata/gle_uncial/\n\n!mkdir output\n\n!bash tesseract/src/training/tesstrain.sh --fonts_dir fonts --lang gle_uncial --linedata_only --noextract_font_properties --langdata_dir langdata --tessdata_dir /usr/share/tesseract-ocr/4.00/tessdata --output_dir output" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
atulsingh0/MachineLearning
Sklearn_MLPython/CH01.ipynb
gpl-3.0
[ "Machine Learning – A Gentle Introduction", "# import\nfrom sklearn.datasets import load_iris\nfrom sklearn.linear_model import SGDClassifier\nfrom sklearn.cross_validation import train_test_split, KFold, cross_val_score\nfrom sklearn.metrics import accuracy_score, classification_report, confusion_matrix\nfrom sklearn import preprocessing, pipeline\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\nsns.set()\n\n#Loading the IRIS dataset\niris_data = load_iris()\n\nX = iris_data['data']\ny = iris_data['target']\n\nprint(iris_data['feature_names'])\nprint(iris_data['target_names'])\n\n# splitting and Pre-Processing the data\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)\nprint(X_train[:2])\nprint(\"X_train shape\", X_train.shape)\nprint(\"X_test shape\", X_test.shape)\n\n# Preprocessing and Standardize the features\nscaler = preprocessing.StandardScaler().fit(X_train)\n\nX_train = scaler.transform(X_train)\nX_test = scaler.transform(X_test)\nprint(X_train[:2])", "SGDClassifier \nSGD stands for Stochastic Gradient Descent, a very popular numerical procedure \nto find the local minimum of a function (in this case, the loss function, which \nmeasures how far every instance is from our boundary). The algorithm will learn the \ncoefficients of the hyperplane by minimizing the loss function.", "# instantiate\nsgd = SGDClassifier()\n\n# fitting\nsgd.fit(X_train, y_train)\n\n# coefficient\nprint(\"coefficient\", sgd.coef_)\n\n# intercept\nprint(\"intercept: \", sgd.intercept_)\n\n# predicting for one\ny_pred = sgd.predict(scaler.transform([[4.9,3.1,1.5,0.1]]))\nprint(y_pred)\n\n# predicting for X_test\ny_pred = sgd.predict(X_test)\n\n# checking accuracy score\nprint(\"Model Accuracy on Train data: \", accuracy_score(y_train, sgd.predict(X_train)))\nprint(\"Model Accuracy on Test data: \", accuracy_score(y_test, y_pred))\n\n\n# let's plot the data\nplt.figure(figsize=(8,6))\n\nplt.scatter(X_train[:,0][y_train==0],X_train[:,1][y_train==0],color='red', label='setosa')\nplt.scatter(X_train[:,0][y_train==1],X_train[:,1][y_train==1],color='blue', label='verginica')\nplt.scatter(X_train[:,0][y_train==2],X_train[:,1][y_train==2],color='green', label='versicolour')\n\nplt.legend(loc='best')", "Classification Report \nAccuracy = (TP+TN)/m \nPrecision = TP/(TP+FP) \nRecall = TP/(TP+FN) \nF1-score = 2 * Precision * Recall / (Precision + Recall)", "# predicting \nprint(classification_report(y_pred=y_pred, y_true=y_test))\n\nconfusion_matrix(y_pred=y_pred, y_true=y_test)", "Using a pipeline mechanism to build and test our model", "# create a composite estimator made by a pipeline of the standarization and the linear model\nclf = pipeline.Pipeline([\n ('scaler', preprocessing.StandardScaler()),\n ('linear_model', SGDClassifier())\n])\n\n# create a k-fold cross validation iterator of k=5 folds\ncv = KFold(X.shape[0], 5, shuffle=True, random_state=33)\n\n# by default the score used is the one returned by score method of the estimator (accuracy)\nscores = cross_val_score(clf, X, y, cv=cv)\n\nprint(scores)\n\n# mean accuracy \nprint(np.mean(scores), sp.stats.sem(scores))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sertansenturk/tomato
demos/audio_analysis_demo.ipynb
agpl-3.0
[ "import os\nimport copy\nfrom matplotlib import pyplot as plt\n\nfrom tomato.audio.audioanalyzer import AudioAnalyzer\n\n\n# instantiate\naudio_filename = os.path.join('..',\n 'sample-data',\n 'ussak--sazsemaisi--aksaksemai----neyzen_aziz_dede',\n 'f970f1e0-0be9-4914-8302-709a0eac088e',\n 'f970f1e0-0be9-4914-8302-709a0eac088e.mp3')\n\naudioAnalyzer = AudioAnalyzer(verbose=True)", "You can use the single line call \"analyze,\" which does all the available analysis simultaneously", "# NOTE: This will take several minutes depending on the performance of your machine\naudio_features = audioAnalyzer.analyze(audio_filename)\n\n# plot the features\nplt.rcParams['figure.figsize'] = [20, 8]\n\naudioAnalyzer.plot(audio_features)\nplt.show()\n", "... or call all the methods individually", "# audio metadata extraction\nmetadata = audioAnalyzer.crawl_musicbrainz_metadata(audio_filename)\n\n# predominant melody extraction\npitch = audioAnalyzer.extract_pitch(audio_filename)\n\n# pitch post filtering\npitch_filtered = audioAnalyzer.filter_pitch(pitch)\n\n# histogram computation\npitch_distribution = audioAnalyzer.compute_pitch_distribution(pitch_filtered)\npitch_class_distribution = copy.deepcopy(pitch_distribution)\npitch_class_distribution.to_pcd()\n\n# tonic identification\ntonic = audioAnalyzer.identify_tonic(pitch_filtered)\n\n# get the makam from metadata if possible else apply makam recognition\nmakams = audioAnalyzer.get_makams(metadata, pitch_filtered, tonic)\nmakam = list(makams)[0] # for now get the first makam\n\n# transposition (ahenk) identification\ntransposition = audioAnalyzer.identify_transposition(tonic, makam)\n\n# stable note extraction (tuning analysis)\nnote_models = audioAnalyzer.compute_note_models(pitch_distribution, tonic, makam)\n\n# get the melodic progression model\nmelodic_progression = audioAnalyzer.compute_melodic_progression(pitch_filtered)\n" ]
[ "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/fio-ronm/cmip6/models/sandbox-2/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: FIO-RONM\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:01\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-2', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/nitroml
examples/visualize_tuner_plots.ipynb
apache-2.0
[ "##### Copyright 2020 Google LLC.\n\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Visualize the MetaLearning pipeline built on top NitroML.\nWe are using NitroML on Kubeflow:\nThis notebook allows users to analyze NitroML metalearning pipelines results.", "# Step 1: Configure your cluster with gcloud\n# `gcloud container clusters get-credentials <cluster_name> --zone <cluster-zone> --project <project-id>\n\n# Step 2: Get the port where the gRPC service is running on the cluster\n# `kubectl get configmap metadata-grpc-configmap -o jsonpath={.data}`\n# Use `METADATA_GRPC_SERVICE_PORT` in the next step. The default port used is 8080.\n\n# Step 3: Port forwarding\n# `kubectl port-forward deployment/metadata-grpc-deployment 9898:<METADATA_GRPC_SERVICE_PORT>`\n\n# Troubleshooting\n# If getting error related to Metadata (For examples, Transaction already open). Try restarting the metadata-grpc-service using:\n# `kubectl rollout restart deployment metadata-grpc-deployment` \n\nimport sys, os\nPROJECT_DIR=os.path.join(sys.path[0], '..')\n%cd {PROJECT_DIR}\n\nimport json\n\nfrom examples import config as cloud_config\nimport examples.tuner_data_utils as tuner_utils\nfrom ml_metadata.proto import metadata_store_pb2\nfrom ml_metadata.metadata_store import metadata_store\nfrom nitroml.benchmark import results\nimport seaborn as sns\nimport tensorflow as tf\nimport qgrid\n\nsns.set()", "Connect to the ML Metadata (MLMD) database\nFirst we need to connect to our MLMD database which stores the results of our\nbenchmark runs.", "connection_config = metadata_store_pb2.MetadataStoreClientConfig()\n\nconnection_config.host = 'localhost'\nconnection_config.port = 9898\n\nstore = metadata_store.MetadataStore(connection_config)", "Get trial summary data (used to plot Area under Learning Curve) stored as AugmentedTuner artifacts.", "# Name of the dataset/subbenchmark\n# This is used to filter out the component path.\ntestdata = 'ilpd' \n\ndef get_metalearning_data(meta_algorithm: str = '', test_dataset: str = '', multiple_runs: bool = True):\n \n d_list = []\n execs = store.get_executions_by_type('nitroml.automl.metalearning.tuner.component.AugmentedTuner')\n model_dir_map = {}\n for tuner_exec in execs:\n\n run_id = tuner_exec.properties['run_id'].string_value\n pipeline_root = tuner_exec.properties['pipeline_root'].string_value\n component_id = tuner_exec.properties['component_id'].string_value\n pipeline_name = tuner_exec.properties['pipeline_name'].string_value\n \n if multiple_runs:\n if '.run_' not in component_id:\n continue\n \n if test_dataset not in component_id:\n continue\n \n if f'metalearning_benchmark' != pipeline_name and meta_algorithm not in pipeline_name:\n continue\n\n config_path = os.path.join(pipeline_root, component_id, 'trial_summary_plot', str(tuner_exec.id))\n model_dir_map[tuner_exec.id] = config_path\n d_list.append(config_path)\n \n return d_list\n\n# Specify the path to tuner_dir from above\n# You can get the list of tuner_dirs by calling: 
get_metalearning_data(multiple_runs=False)\nexample_plot = ''\nif not example_plot:\n raise ValueError('Please specify the path to the tuner plot dir.')\n \nwith tf.io.gfile.GFile(os.path.join(example_plot, 'tuner_plot_data.txt'), mode='r') as fin:\n data = json.load(fin)\n \ntuner_utils.display_tuner_data(data, save_plot=False)", "Majority Voting", "algorithm = 'majority_voting' \nd_list = get_metalearning_data(algorithm, testdata)\n\nd_list\n\n# Select the runs from `d_list` to visualize. \n\ndata_list = []\n\nfor d in d_list:\n with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:\n data_list.append(json.load(fin))\n\ntuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)", "Nearest Neighbor", "algorithm = 'nearest_neighbor' \nd_list = get_metalearning_data(algorithm, testdata)\n\nd_list\n\n# Select the runs from `d_list` to visualize. \n\ndata_list = []\n\nfor d in d_list:\n with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:\n data_list.append(json.load(fin))\n\ntuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
darkomen/TFG
medidas/04082015/estudio.datos.ipynb
cc0-1.0
[ "Análisis de los datos obtenidos\nUso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 11 de Agosto del 2015", "#Importamos las librerías utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerías\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('841512.CSV')\n\n%pylab inline\n\n#Mostramos un resumen de los datos obtenidoss\ndatos.describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X', 'Diametro Y', 'RPM TRAC']\n\n#Mostramos en varias gráficas la información obtenida tras el ensayo\ndatos[columns].plot(subplots=True, figsize=(20,20))", "Representamos ambos diámetros en la misma gráfica", "datos.ix[:, \"Diametro X\":\"Diametro Y\"].plot(figsize=(16,3),ylim=(1.4,2)).hlines([1.85,1.65],0,3500,colors='r')\n\ndatos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Mostramos la representación gráfica de la media de las muestras", "pd.rolling_mean(datos[columns], 50).plot(subplots=True, figsize=(12,12))", "Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento", "plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')", "Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.", "datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]", "Representación de X/Y", "plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')", "Analizamos datos del ratio", "ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))", "Límites de calidad\nCalculamos el número de veces que traspasamos unos límites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$", "Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
saketkc/hatex
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
mit
[ "Problem 2\nDefine $h(a) = P(\\tau_{\\phi} < \\tau_{\\dagger}) | X_0=a)$\nTo show: $P(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger}) = \\frac{h(b)}{h(a)}P_{ab}$\nLHS: $P(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger})$ \nIncorrect version\nLHS: \n~~\\begin{align}\n~~P(X_{1}=b|X_0=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger}) &= P(X_{1}=b|X_0=a, \\tau_{\\phi} < \\tau_{\\dagger}|X_0=a)~~\\\n&=P(\\tau_{\\phi} < \\tau_{\\dagger}|X_0=a) \\times P(X_1=b|X_0=a)\\ \\text{if $a \\notin \\phi,\\dagger$}\\\n&=h(a) \\times P_{ab} \\\n&\\neq RHS??\n\\end{align}~~\nCorrected Version\nConsider \n$P(A|B,C)=P(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger})$\nThen\n$P(A|B,C)= \\frac{P(A,B,C)}{P(B,C)}=\\frac{P(A,C|B)P(B)}{P(C|B)P(B)}=\\frac{P(A,C|B)}{P(C|B)} =\\frac{P(A|C) \\times{P(B|A,C)}}{P(B|C)}$\nThus,\n$$\nLHS=P(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger}) = \\frac{P(X_{n+1}=b,\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)}\n{P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)}\n$$\nNow, $n > min(\\tau_{\\phi},\\tau_{\\dagger})$\nand hence:\n$$\n\\begin{align}\nP(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger}) &= \\frac{P(X_{n+1}=b,\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)}\n{P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)} \\\n&= \\frac{P(X_{n+1}=b|X_n=a)\\times P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a,X_{n+1}=b)}{P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)}\n\\end{align}\n$$\nUsing markov property and time homogeneity:\n$P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a,X_{n+1}=b)=P(\\tau_{\\phi} < \\tau_{\\dagger}|X_0=b)$\nand hence:\n$$\n\\begin{align}\nP(X_{n+1}=b|X_n=a\\ and\\ \\tau_{\\phi} < \\tau_{\\dagger}) &= \\frac{P(X_{n+1}=b|X_n=a)\\times P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a,X_{n+1}=b)}{P(\\tau_{\\phi} < \\tau_{\\dagger}|X_n=a)}\\\n&=\\frac{h(b)\\times P_{ab}}{h(a)}\\\n&= RHS\n\\end{align}\n$$\nProblem 3\nIf initial state $X_t=A$,\n$P(X_{t+1}=A) = 0.5$ and $P(X_{t+1}=A\\cup{b}-{a})=0.5$\nObservation 1: $X_t$ is irreducbile. The construction allows to reach every state from any state.\nExample: Let $A={1,2,3,4,5}$ for $n=10$ and $k=5$. let $a$=3 and let $b=6$\nThen we have: $P(X_{t+1}={1,2,3,4,5}) = 0.5$ and $P(X_{t+1}={1,2,4,5,x})=0.5*1/5$ where $x\\ \\in {6,7,8,9,10}$\nObservation 2: For $X$ to be aperioidic, it is imporatant to have the $X_{t+1}=X_t$ with probabulity 0.5(any non-zero probability would do). Otherwise the diagonal of the trasition probability matrix will be zero, and in such cases it is possible for the chain to be periodic. An example (without taking into account the actual transition probabilities) is:\nFor state space.${1,2,3,4}$\n$$\nP = \\begin{bmatrix}\n0 & 0.5 & 0 & 0.5\\\n0.5 & 0 & 0.5 & 0\\\n0 & 0.5 & 0 & 0.5\\\n0.5 & 0 & 0.5 & 0\\\n\\end{bmatrix}\n$$\nand $(P^2)_{ii}>0$\nIt is possible to return to the same state with a period of 2:\n$P(X_n=2|X_0=1)= 0 \\ for\\ $n=2k$\\ and\\ 1\\ for\\ $n=2k-1$\\ where\\ k=1,2,3...$\nAbout uniform stationary distribution\nFrom observations 1,2 we know that the markv chain is irreducible and aperiodic. 
There is another observation:\nObservation 3: $P$ the transition probabilty matrix is symmetric.\n$P_{ii} = 0.5$\n$P_{ij} = 0.5 * \\underbrace{\\frac{1}{|k|}}\\text{Probability of selecting 'i' uniformly} * \\underbrace{\\frac{1}{|A|-|k|}}\\text{Probability of selecting 'j' uniformly}$ $\\forall j \\neq i$\nand hence $P_{ij} =P_{ji}$ $\\implies$ $P=P^T$ $\\implies$ $\\pi$ is uniformly distributed (Because $P$ is reversible)\nProblem 4\nPart (4a)", "%matplotlib inline\nfrom __future__ import division\nimport numpy as np\nimport matplotlib.pyplot as plt\nnp.random.seed(1)\nD = np.random.rand(100,100)\n## This is not symmetric, so we make it symmetric\nD = (D+D.T)/2\nprint (D)\n\nimport math\nN_steps = 10000\n\ndef L(sigma):\n s=0\n for i in range(0, len(sigma)-1):\n s+=D[sigma[i], sigma[i+1]]\n return s\n\ndef propose(sigma):\n r = np.random.choice(len(sigma), 2)\n rs = np.sort(r)\n j,k=rs[0],rs[1]\n x=(sigma[j:k])#.reverse()\n x=x[::-1]\n x0= sigma[:j]\n x1 = sigma[k:]\n y=np.concatenate((x0,x,x1))\n return y\n\n\ndef pi(sigma,T):\n return math.exp(-L(sigma)/T)\n\ndef metropolis(sigma,T,L_0):\n sigma_n = propose(sigma)\n L_n = L(sigma_n)\n pi_ab = math.exp(-(L_n-L_0)/T)\n q = min(1, pi_ab)\n b = np.random.uniform(size=1)\n if (b<q):\n return sigma_n\n else:\n return sigma\n \n\nsigma_0 = np.random.choice(100,100)\nL_0 = L(sigma_0)\nprint sigma_0\n\nT = [0.05,10]\ndef plotter(t):\n L_history = []\n sigma_history = []\n sigma_0 = np.random.choice(100,100)\n L_0 = L(sigma_0)\n L_history.append(L_0)\n sigma_history.append(sigma_0)\n sigma = metropolis(sigma_0,t,L_0)\n for i in range(1, N_steps):\n sigma_t = metropolis(sigma_history[i-1],t,L_history[i-1])\n L_1 = L(sigma_t)\n L_history.append(L_1)\n sigma_history.append(sigma_t)\n plt.figure(0)\n\n plt.hist(L_history, 20)\n #plt.xlim(min(L_history)-25, max(L_history)+0.5)\n plt.xlabel('Length')\n plt.ylabel('Frequency')\n plt.title('Frequency of L')\n plt.figure(1)\n\n plt.plot(range(1, N_steps+1),L_history)\n plt.ylim(min(L_history), max(L_history))\n plt.xlabel('N_steps')\n plt.ylabel('L')\n plt.title('Variation of L with N_steps')\n return L_history", "T = 0.05", "L_t0=plotter(T[0])", "T=10", "L_t1= plotter(T[1])", "Correlation plots", "from scipy.signal import correlate\ndef autocorr(x):\n xunbiased = x-np.mean(x)\n xnorm = np.sum(xunbiased**2)\n acor = np.correlate(xunbiased, xunbiased, \"same\")/xnorm\n #result = correlate(x, x, mode='full')\n #result /= result[result.argmax()]\n acor = acor[len(acor)/2:]\n return acor#result[result.size/2:]\n\ncov_t0 = autocorr(L_t0)\ncov_t1 = autocorr(L_t1)\n\n\nplt.plot(cov_t0)\nplt.ylabel('Autocorrelation')\nplt.xlabel('N_steps')\nplt.title('Autocorrelation of L_i for T=0.05')\n\nplt.plot(cov_t1)\nplt.ylabel('Autocorrelation')\nplt.xlabel('N_steps')\nplt.title('Autocorrelation of L_i for T=10')", "Result\nThe autocorrelation seems to be high even for large values of $N_{step}$ for both the temperature values. I expected higher $T$ to yield lower autocorrelations.\nProblem 1\nLet the state space be $S = {\\phi, \\alpha, \\beta, \\alpha+\\beta, pol, \\dagger}$ \nDefinitions:\n1. $\\tau_a = { n \\geq 0: X_n=a}$\n\n\n$N = \\sum_{k=0}^{\\tau_{\\phi}}I_{X_k=\\dagger}$ \n\n\n$u(a) = E[N|X_0=a] \\forall a \\in S $\n\n\n$u(a) = \\sum_{k=0}^{\\tau_{\\phi}}P(X_k=\\dagger|X_0=a)=\\sum_{b \\neq a, \\dagger }P(X_1=b|X_0=a)P(X_k=\\dagger|X_0=b)$ $\\implies$ $u(a)=\\sum_{b \\neq a, \\dagger} P_{ab}u(b)$\nAnd hence $u$ solves the following set of equations:\n$u=(I-P_{-})^{-1}v$ where v is (0,0,0,1) in this case. 
and $P_{-}$ represents the matrix with that last and first row and columns removed.", "k_a=0.2\nk_b=0.2\nk_p=0.5\nP = np.matrix([[1-k_a-k_b, k_a ,k_b, 0, 0, 0],\n [k_a, 1-k_a-k_b, 0, k_b, 0, 0],\n [k_b, 0, 1-k_a-k_b, k_a, 0, 0],\n [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],\n [0, 0, 0, 0, 0, 1],\n [0, 0, 0, 1, 0, 0]])\n\n\nQ=P[1:5,1:5]\niq = np.eye(4)-Q\niqi = np.linalg.inv(iq)\nprint(iq)\nprint(iqi)\n\nprint 'U={}'.format(iqi[:,-1])\nu=iqi[:,-1]\n\nPP = {}\nstates = ['phi', 'alpha', 'beta', 'ab', 'pol', 'd']\n\nPP['phi']= [1-k_a-k_b, k_a ,k_b, 0, 0, 0]\nPP['alpha'] = [k_a, 1-k_a-k_b, 0, k_b, 0, 0]\nPP['beta'] = [k_b, 0, 1-k_a-k_b, k_a, 0, 0]\nPP['ab']= [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0]\nPP['pol']= [0, 0, 0, 0, 0, 1]\nPP['d']= [0, 0, 0, 1, 0, 0]\ndef h(x):\n s=0\n ht=0\n cc=0\n for j in range(1,100):\n new_state=x\n for i in range(1,10000):\n old_state=new_state\n probs = PP[old_state]\n z=np.random.choice(6, 1, p=probs)\n new_state = states[z[0]]\n s+=z[0]\n if new_state=='d':\n ht+=i\n cc+=1\n break\n else:\n continue\n\n return s/1000, ht/cc\n\n", "$\\alpha$", "print('Simulation: {}\\t Calculation: {}'.format(h('alpha')[1],u[0]))", "$\\beta$", "print('Simulation: {}\\t Calculation: {}'.format(h('beta')[1],u[1]))", "$\\alpha+\\beta$", "print('Simulation: {}\\t Calculation: {}'.format(h('ab')[1],u[2]))", "pol", "print('Simulation: {}\\t Calculation: {}'.format(h('pol')[1],u[3]))", "Result\nThe simulation and calculation do not agree. The simulation implementation doesn't look correct. However, looking at $\\alpha$ and $\\beta$ results, the simulation and calculated results seem to be in-sync." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BYUFLOWLab/MDOnotebooks
SymbolicVsAD.ipynb
mit
[ "Symbolic Differentiation vs Automatic Differentiation\nConsider the function below that, at least computationally, is very simple.", "from math import sin, cos\n\ndef func(x):\n y = x\n for i in range(30):\n y = sin(x + y)\n\n return y", "We can compute a derivative symbolically, but it is of course horrendous (see below). Think of how much worse it would be if we chose a function with products, more dimensions, or iterated more than 20 times.", "from sympy import diff, Symbol, sin\nfrom __future__ import print_function\n\nx = Symbol('x')\ndexp = diff(func(x), x)\nprint(dexp)", "We can now evaluate the expression.", "xpt = 0.1\n\ndfdx = dexp.subs(x, xpt)\n\nprint('dfdx =', dfdx)", "Let's compare with automatic differentiation using operator overloading:", "from algopy import UTPM, sin\n\nx_algopy = UTPM.init_jacobian(xpt)\ny_algopy = func(x_algopy)\ndfdx = UTPM.extract_jacobian(y_algopy)\n \nprint('dfdx =', dfdx)", "Let's also compare to AD using a source code transformation method (I used Tapenade in Fortran)", "def funcad(x):\n xd = 1.0\n yd = xd\n y = x\n for i in range(30):\n yd = (xd + yd)*cos(x + y)\n y = sin(x + y)\n return yd\n\ndfdx = funcad(xpt)\n\nprint('dfdx =', dfdx)", "For a simple expression like this, symbolic differentiation is long but actually works reasonbly well, and both will give a numerically exact answer. But if we change the loop to 100 (go ahead and try this) or add other complications, the symbolic solver will fail. However, automatic differentiation will continue to work without issue (see the simple source code transformation version). Furthermore, if we add other dimensions to the problem, symbolic differentiation quickly becomes costly as lots of computations get repeated, whereas automatic differentiation is able to reuse a lot of calculations." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
josdaza/deep-toolbox
TensorFlow/02_Linear_Regression.ipynb
mit
[ "Visualizando datos de entrada", "import numpy as np\nimport matplotlib.pyplot as plt\n\n# Regresa 101 numeros igualmmente espaciados en el intervalo[-1,1]\nx_train = np.linspace(-1, 1, 101)\n\n# Genera numeros pseudo-aleatorios multiplicando la matriz x_train * 2 y \n# sumando a cada elemento un ruido (una matriz del mismo tamanio con puros numeros random) \ny_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33\n\nprint(np.random.randn(*x_train.shape))\n\nplt.scatter(x_train, y_train)\nplt.show()", "Algoritmo de Regresion Lineal en TensorFlow", "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlearning_rate = 0.01\ntraining_epochs = 100\n\nx_train = np.linspace(-1,1,101)\ny_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33\n\nX = tf.placeholder(\"float\")\nY = tf.placeholder(\"float\")\n\ndef model(X,w):\n return tf.multiply(X,w)\n\nw = tf.Variable(0.0, name=\"weights\")\n\ny_model = model(X,w)\ncost = tf.square(Y-y_model)\n\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)\n\nfor epoch in range(training_epochs):\n for (x,y) in zip(x_train, y_train):\n sess.run(train_op, feed_dict={X:x, Y:y})\n \nw_val = sess.run(w)\n\nsess.close()\n\nplt.scatter(x_train, y_train)\ny_learned = x_train*w_val\nplt.plot(x_train, y_learned, 'r')\nplt.show()", "Regresion Lineal en Polinomios de grado N", "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlearning_rate = 0.01\ntraining_epochs = 40\n\ntrX = np.linspace(-1, 1, 101)\nnum_coeffs = 6\ntrY_coeffs = [1, 2, 3, 4, 5, 6]\ntrY = 0\n\n#Construir datos polinomiales pseudo-aleatorios para probar el algoritmo\nfor i in range(num_coeffs):\n trY += trY_coeffs[i] * np.power(trX, i)\n trY += np.random.randn(*trX.shape) * 1.5\n \nplt.scatter(trX, trY)\nplt.show()\n\n# Construir el grafo para TensorFlow\nX = tf.placeholder(\"float\")\nY = tf.placeholder(\"float\")\n\ndef model(X, w):\n terms = []\n for i in range(num_coeffs):\n term = tf.multiply(w[i], tf.pow(X, i))\n terms.append(term)\n return tf.add_n(terms)\n\nw = tf.Variable([0.] * num_coeffs, name=\"parameters\")\ny_model = model(X, w)\ncost = (tf.pow(Y-y_model, 2))\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\n#Correr el Algoritmo en TensorFlow\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)\n\nfor epoch in range(training_epochs):\n for (x, y) in zip(trX, trY):\n sess.run(train_op, feed_dict={X: x, Y: y})\n\nw_val = sess.run(w)\nprint(w_val)\nsess.close()\n\n# Mostrar el modelo construido\nplt.scatter(trX, trY)\ntrY2 = 0\nfor i in range(num_coeffs):\n trY2 += w_val[i] * np.power(trX, i)\n\nplt.plot(trX, trY2, 'r')\nplt.show()", "Regularizacion\nPara manejar un poco mejor el impacto que tienen los outliers sobre nuestro modelo (y asi evitar que el modelo produzca curvas demasiado complicadas, y el overfitting) existe el termino Regularizacion que se define como:\n$$ Cost(X,Y) = Loss(X,Y) + \\lambda |x| $$\nen donde |x| es la norma del vector (la distancia del vector al origen, ver el tema de Norms en otro lado, por ejemplo L1 o L2 norm) que se utiliza como cantidad penalizadora y lambda es como parametro para ajustar que tanto afectara la penalizacion. 
Entre mas grande sea lambda mas penalizado sera ese punto, y si lambda es 0 entonces se tiene el modelo inicial que no aplica reguarizacion.\nPara obtener un valor optimo de gama, se tiene que hacer un split al dataset y...", "import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef split_dataset(x_dataset, y_dataset, ratio):\n arr = np.arange(x_dataset.size)\n np.random.shuffle(arr)\n num_train = int(ratio* x_dataset.size)\n x_train = x_dataset[arr[0:num_train]]\n y_train = y_dataset[arr[0:num_train]]\n x_test = x_dataset[arr[num_train:x_dataset.size]]\n y_test = y_dataset[arr[num_train:x_dataset.size]]\n return x_train, x_test, y_train, y_test\n\nlearning_rate = 0.001\ntraining_epochs = 1000\nreg_lambda = 0.\n\nx_dataset = np.linspace(-1, 1, 100)\n\nnum_coeffs = 9\ny_dataset_params = [0.] * num_coeffs\ny_dataset_params[2] = 1\ny_dataset = 0\n\nfor i in range(num_coeffs):\n y_dataset += y_dataset_params[i] * np.power(x_dataset, i)\ny_dataset += np.random.randn(*x_dataset.shape) * 0.3\n\n(x_train, x_test, y_train, y_test) = split_dataset(x_dataset, y_dataset, 0.7)\nX = tf.placeholder(\"float\")\nY = tf.placeholder(\"float\")\n\ndef model(X, w):\n terms = []\n for i in range(num_coeffs):\n term = tf.multiply(w[i], tf.pow(X,i))\n terms.append(term)\n return tf.add_n(terms)\n\nw = tf.Variable([0.] * num_coeffs, name=\"parameters\")\ny_model = model(X, w)\ncost = tf.div(tf.add(tf.reduce_sum(tf.square(Y-y_model)),\n tf.multiply(reg_lambda, tf.reduce_sum(tf.square(w)))), \n 2*x_train.size)\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)\n\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run(init)\n\ni,stop_iters = 0,15\nfor reg_lambda in np.linspace(0,1,100):\n i += 1\n for epoch in range(training_epochs):\n sess.run(train_op, feed_dict={X: x_train, Y: y_train})\n final_cost = sess.run(cost, feed_dict={X: x_test, Y:y_test})\n print('reg lambda', reg_lambda)\n print('final cost', final_cost)\n if i > stop_iters: break\n\nsess.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tjwei/PythonTutorial
Tutorial 3.ipynb
mit
[ "Zebra Puzzle\n\n有五間房子\n英國人住紅色的房子\n西班牙人養狗\n住綠色房子的人喝咖啡\n烏克蘭人喝茶\n綠色房子緊鄰的左邊(你的右邊)是白色房子\n抽「Old Gold」牌香菸的人養蝸牛\n黃色房子的人抽「Kools」牌香菸\n正中間房子的人喝牛奶\n挪威人住左邊(你的右邊)第一間房子\n抽「Chesterfields」牌香菸的人,住在養狐狸的人的隔壁\n抽「Kools」牌香菸的人,住在養馬的人隔壁\n抽「Lucky Strike」牌香菸的人,喝橘子汁\n日本人抽「Parliament」牌香菸\n挪威人住在藍色房子的隔壁\n\n Question:誰喝水? 誰養斑馬?", "import itertools \n屋子 = 第一間, _, 中間, _, _ = [1, 2, 3, 4, 5]\n所有順序 = list(itertools.permutations(屋子))\n所有順序\n\ndef 在右邊(h1, h2):\n \"h1 緊鄰 h2 的右邊.\"\n return h1-h2 == 1\n\ndef 隔壁(h1, h2):\n \"h1 h2 在隔壁\"\n return abs(h1-h2) == 1\n\ndef zebra_puzzle(): \n return [locals()\n for (紅, 綠, 白, 黃, 藍) in 所有順序\n for (英國人, 西班牙人, 烏克蘭人, 日本人, 挪威人) in 所有順序\n for (咖啡, 茶, 牛奶, 橘子汁, 水) in 所有順序\n for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in 所有順序\n for (狗, 蝸牛, 狐狸, 馬, 斑馬) in 所有順序 \n if 英國人 is 紅 #2\n if 西班牙人 is 狗 #3\n if 咖啡 is 綠 #4\n if 烏克蘭人 is 茶 #5\n if 在右邊(綠, 白) #6 \n if OldGold is 蝸牛 #7\n if Kools is 黃 #8\n if 牛奶 is 中間 #9\n if 挪威人 is 第一間 #10\n if 隔壁(Chesterfields, 狐狸) #11\n if 隔壁(Kools, 馬) #12\n if LuckyStrike is 橘子汁 #13\n if 日本人 is Parliaments #14 \n if 隔壁(挪威人, 藍) #15 \n ]\nzebra_puzzle()", "時間太長!", "def zebra_puzzle(): \n return [locals()\n for (紅, 綠, 白, 黃, 藍) in 所有順序\n if 在右邊(綠, 白) #6\n for (英國人, 西班牙人, 烏克蘭人, 日本人, 挪威人) in 所有順序\n if 英國人 is 紅 #2\n if 挪威人 is 第一間 #10\n if 隔壁(挪威人, 藍) #15\n for (咖啡, 茶, 牛奶, 橘子汁, 水) in 所有順序\n if 咖啡 is 綠 #4\n if 烏克蘭人 is 茶 #5\n if 牛奶 is 中間 #9\n for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in 所有順序\n if Kools is 黃 #8\n if LuckyStrike is 橘子汁 #13\n if 日本人 is Parliaments #14\n for (狗, 蝸牛, 狐狸, 馬, 斑馬) in 所有順序\n if 西班牙人 is 狗 #3\n if OldGold is 蝸牛 #7\n if 隔壁(Chesterfields, 狐狸) #11\n if 隔壁(Kools, 馬) #12\n ]\nzebra_puzzle()\n\ndef result(d): return {i:[k for k,v in d.items() if v == i] for i in 屋子}\ndef zebra_puzzle(): \n return [result(locals())\n for (紅, 綠, 白, 黃, 藍) in 所有順序\n if 在右邊(綠, 白)\n for (英國人, 西班牙人, 烏克蘭人, 日本人, 挪威人) in 所有順序\n if 英國人 is 紅\n if 挪威人 is 第一間\n if 隔壁(挪威人, 藍)\n for (咖啡, 茶, 牛奶, 橘子汁, 水) in 所有順序\n if 咖啡 is 綠\n if 烏克蘭人 is 茶\n if 牛奶 is 中間\n for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in 所有順序\n if Kools is 黃\n if LuckyStrike is 橘子汁\n if 日本人 is Parliaments\n for (狗, 蝸牛, 狐狸, 馬, 斑馬) in 所有順序\n if 西班牙人 is 狗\n if OldGold is 蝸牛\n if 隔壁(Chesterfields, 狐狸)\n if 隔壁(Kools, 馬) ]\nzebra_puzzle()[0]", "Credit:\n基於 Udacity's CS212 的解答" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
sergey-tomin/workshop
1_introduction.ipynb
mit
[ "This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.\nAn Introduction to Ocelot\nOcelot is a multiphysics simulation toolkit designed for studying FEL and storage ring based light sources. Ocelot is written in Python. Its central concept is the writing of python's scripts for simulations with the usage of Ocelot's modules and functions and the standard Python libraries. \nOcelot includes following main modules:\n* Charge particle beam dynamics module (CPBD)\n - optics\n - tracking\n - matching\n - collective effects\n - Space Charge (true 3D Laplace solver) \n - CSR (Coherent Synchrotron Radiation) (1D model with arbitrary number of dipole) (under development).\n - Wakefields (Taylor expansion up to second order for arbitrary geometry).\n - MOGA (Multi Objective Genetics Algorithm). (under development but we have already applyed it for a storage ring aplication)\n* Native module for spontaneous radiation calculation\n* FEL calculations: interface to GENESIS and pre/post-processing\n* Modules for online beam control and online optimization of accelerator performances. Work1, work2, work3.\nOcelot extensively uses Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python and give you access to various mathematical and optimization techniques and algorithms. To produce high quality figures Python's matplotlib library is used.\nIt is an open source project and it is being developed by physicists from The European XFEL, DESY (Germany), NRC Kurchatov Institute (Russia).\nWe still have no documentation but you can find a lot of examples in ocelot/demos/ \nOcelot user profile\nOcelot is designed for researchers who want to have the flexibility that is given by high-level languages such as Matlab, Python (with Numpy and SciPy) or Mathematica.\nHowever if someone needs a GUI it can be developed using Python's libraries like a PyQtGraph or PyQt. \nFor example, you can see GUI for SASE optimization (uncomment and run next block)", "from IPython.display import Image\n#Image(filename='gui_example.png') ", "Outline\n\nPreliminaries: Setup & introduction\n\nBeam dynamics\n\nTutorial N1. Linear optics.. Web version.\nLinear optics. DBA.\n\n\nTutorial N2. Tracking.. Web version.\nLinear optics of the European XFEL Injector\nTracking. First and second order. \n\n\nTutorial N3. Space Charge.. Web version.\nTracking with SC effects.\n\n\nTutorial N4. Wakefields.. Web version.\nTracking with Wakefields\n\n\n\nFEL calculation\n\nTutorial N5: Genesis preprocessor. Web version.\nTutorial N6. Genesis postprocessor. Web version.\n\nAll IPython (jupyter) notebooks (.ipynb) have analogues in the form of python scripts (.py). \nAll these notebooks as well as additional files (beam distribution, wakes, ...) you can download here.\nPreliminaries\nThe tutorial includes 4 simple examples dediacted to beam dynamics. However, you should have a basic understanding of Computer Programming terminologies. 
A basic understanding of Python language is a plus.\nThis tutorial requires the following packages:\n\nPython version 2.7 or 3.4-3.5\nnumpy version 1.8 or later: http://www.numpy.org/\nscipy version 0.15 or later: http://www.scipy.org/\nmatplotlib version 1.5 or later: http://matplotlib.org/\nipython version 2.4 or later, with notebook support: http://ipython.org\n\nThe easiest way to get these is to download and install the (very large) Anaconda software distribution.\nAlternatively, you can download and install miniconda.\nThe following command will install all required packages:\n$ conda install numpy scipy matplotlib ipython-notebook\nOcelot installation\n\nyou have to download from GitHub zip file. \nUnzip ocelot-master.zip to your working folder ../your_working_dir/. \nRename folder ../your_working_dir/ocelot-master to ../your_working_dir/ocelot. \n\nAdd ../your_working_dir/ to PYTHONPATH\n\nWindows 7: go to Control Panel -> System and Security -> System -> Advance System Settings -> Environment Variables.\nand in User variables add ../your_working_dir/ to PYTHONPATH. If variable PYTHONPATH does not exist, create it\n\nVariable name: PYTHONPATH\nVariable value: ../your_working_dir/\n- Linux: \n$ export PYTHONPATH=**../your_working_dir/**:$PYTHONPATH\n\n\nTo launch \"ipython notebook\" or \"jupyter notebook\"\nin command line run following commands:\n$ ipython notebook\nor\n$ ipython notebook --notebook-dir=\"path_to_your_directory\"\nor\n$ jupyter notebook --notebook-dir=\"path_to_your_directory\"\nChecking your installation\nYou can run the following code to check the versions of the packages on your system:\n(in IPython notebook, press shift and return together to execute the contents of a cell)", "import IPython\nprint('IPython:', IPython.__version__)\n\nimport numpy\nprint('numpy:', numpy.__version__)\n\nimport scipy\nprint('scipy:', scipy.__version__)\n\nimport matplotlib\nprint('matplotlib:', matplotlib.__version__)\n\nimport ocelot\nprint('ocelot:', ocelot.__version__)", "<a id=\"tutorial1\"></a>\nTutorial N1. Double Bend Achromat.\nWe designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation. 
\nAlso, we chose DBA to demonstrate the periodic solution for the optical functions calculation.", "from __future__ import print_function\n\n# the output of plotting commands is displayed inline within frontends, \n# directly below the code cell that produced it\n%matplotlib inline\n\n# import from Ocelot main modules and functions\nfrom ocelot import *\n\n# import from Ocelot graphical modules\nfrom ocelot.gui.accelerator import *", "Creating lattice\nOcelot has the following elements: Drift, Quadrupole, Sextupole, Octupole, Bend, SBend, RBend, Edge, Multipole, Hcor, Vcor, Solenoid, Cavity, Monitor, Marker, Undulator.", "# defining of the drifts\nD1 = Drift(l=2.)\nD2 = Drift(l=0.6)\nD3 = Drift(l=0.3)\nD4 = Drift(l=0.7)\nD5 = Drift(l=0.9)\nD6 = Drift(l=0.2)\n\n# defining of the quads\nQ1 = Quadrupole(l=0.4, k1=-1.3)\nQ2 = Quadrupole(l=0.8, k1=1.4)\nQ3 = Quadrupole(l=0.4, k1=-1.7)\nQ4 = Quadrupole(l=0.5, k1=1.3)\n\n# defining of the bending magnet\nB = Bend(l=2.7, k1=-.06, angle=2*pi/16., e1=pi/16., e2=pi/16.)\n\n# defining of the sextupoles\nSF = Sextupole(l=0.01, k2=1.5) #random value\nSD = Sextupole(l=0.01, k2=-1.5) #random value\n\n# cell creating\ncell = (D1, Q1, D2, Q2, D3, Q3, D4, B, D5, SD, D5, SF, D6, Q4, D6, SF, D5, SD, D5, B, D4, Q3, D3, Q2, D2, Q1, D1)", "hint: to see a short description of a function, put the cursor inside its parentheses and press Shift-Tab, or type ? before the function name. To extend the help dialog press +\nThe cell is a list of simple objects which contain the physical information of the lattice elements, such as length, strength, voltage and so on. In order to create a transport map for every element and bind it to a lattice object, we have to create a new Ocelot object - MagneticLattice() - which does these things automatically. \nMagneticLattice(sequence, start=None, stop=None, method=MethodTM()): \n* sequence - list of the elements;\nthe other parameters we will consider in tutorial N2.", "lat = MagneticLattice(cell)\n\n# to see the total length of the lattice \nprint(\"length of the cell: \", lat.totalLen, \"m\")", "Optical function calculation\nUses: \n* the twiss() function and\n* the Twiss() object, which contains the twiss parameters and other information at one particular position (s) of the lattice.\nTo calculate twiss parameters you have to run the twiss(lattice, tws0=None, nPoints=None) function. If you want to get a periodic solution, leave tws0 at its default. \nYou can change the number of points over the cell. If nPoints=None, then twiss parameters are calculated at the end of each element.\nThe twiss() function returns a list of Twiss() objects.\nYou will see that the Twiss object contains more information than just twiss parameters.", "tws=twiss(lat)\n\n# to see twiss parameters at the beginning of the cell, uncomment next line\n# print(tws[0])\n\n# to see twiss parameters at the end of the cell, uncomment next line\nprint(tws[-1])\n\nlen(tws)\n\n# plot optical functions.\nplot_opt_func(lat, tws, top_plot = [\"Dx\", \"Dy\"], legend=False, font_size=10)\nplt.show()\n\n# you can also use standard matplotlib functions for plotting\n#s = [tw.s for tw in tws]\n#bx = [tw.beta_x for tw in tws]\n#plt.plot(s, bx)\n#plt.show()\n\n# you can play with the quadrupole strength and try to make an achromat\nQ4.k1 = 1.18\n\n# to make an achromat uncomment next line\n# Q4.k1 = 1.18543769836\n# To use the matching function, please see ocelot/demos/ebeam/dba.py \n\n# updating transfer maps after changing element parameters. 
\nlat.update_transfer_maps()\n\n# recalculate twiss parameters \ntws=twiss(lat, nPoints=1000)\n\nplot_opt_func(lat, tws, legend=False)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TheOregonian/long-term-care-db
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
mit
[ "This is a dataset of Assisted Living, Nursing and Residential Care facilities in Oregon, open as of January, 2017. For each, we have:\n1. <i>facility_id:</i> Unique ID used to join to complaints\n2. <i>fac_ccmunumber:</i> Unique ID used to join to ownership history\n3. <i>facility_type:</i> NF - Nursing Facility; RCF - Residential Care Facility; ALF - Assisted Living Facility\n4. <i>fac_capacity:</i> Number of beds facility is licensed to have. Not necessarily the number of beds facility does have.\n5. <i>offline:</i> created in munging notebook, a count of complaints that DO NOT appear when facility is searched on state's complaint search website (https://apps.state.or.us/cf2/spd/facility_complaints/).\n6. <i>online:</i> created in munging notebook, a count of complaints that DO appear when facility is searched on state's complaint search website (https://apps.state.or.us/cf2/spd/facility_complaints/).", "import pandas as pd\nimport numpy as np\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\n\ndf = pd.read_csv('/Users/fzarkhin/OneDrive - Advance Central Services, Inc/fproj/github/database-story/data/processed/facilities.csv')", "<h3>How many facilities have accurate records online?</h3>\n\nThose that have no offline records.", "df[(df['offline'].isnull())].count()[0]", "<h3>How many facilities have inaccurate records online?<h/3>\n\nThose that have offline records.", "df[(df['offline'].notnull())].count()[0]", "<h3>How many facilities had more than double the number of complaints shown online?</h3>", "df[(df['offline']>df['online']) & (df['online'].notnull())].count()[0]", "<h3>How many facilities show zero complaints online but have complaints offline?</h3>", "df[(df['online'].isnull()) & (df['offline'].notnull())].count()[0]", "<h3>How many facilities have complaints and are accurate online?</h3>", "df[(df['online'].notnull()) & (df['offline'].isnull())].count()[0]", "<h3>How many facilities have complaints?</h3>", "df[(df['online'].notnull()) | df['offline'].notnull()].count()[0]", "<h3>What percent of facilities have accurate records online?</h3>", "df[(df['offline'].isnull())].count()[0]/df.count()[0]*100", "<h3>What is the total capacity of all facilities with inaccurate records?</h3>", "df[df['offline'].notnull()].sum()['fac_capacity']\n\ndf[df['fac_capacity'].isnull()]\n\n#df#['fac_capacity'].sum()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
bsd-3-clause
[ "%matplotlib inline", "Artifact Correction with SSP", "import numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.preprocessing import compute_proj_ecg, compute_proj_eog\n\n# getting some data ready\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.set_eeg_reference()\nraw.pick_types(meg=True, ecg=True, eog=True, stim=True)", "Compute SSP projections", "projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, average=True)\nprint(projs)\n\necg_projs = projs[-2:]\nmne.viz.plot_projs_topomap(ecg_projs)\n\n# Now for EOG\n\nprojs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, average=True)\nprint(projs)\n\neog_projs = projs[-2:]\nmne.viz.plot_projs_topomap(eog_projs)", "Apply SSP projections\nMNE is handling projections at the level of the info,\nso to register them populate the list that you find in the 'proj' field", "raw.info['projs'] += eog_projs + ecg_projs", "Yes this was it. Now MNE will apply the projs on demand at any later stage,\nso watch out for proj parmeters in functions or to it explicitly\nwith the .apply_proj method\nDemonstrate SSP cleaning on some evoked data", "events = mne.find_events(raw, stim_channel='STI 014')\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n# this can be highly data dependent\nevent_id = {'auditory/left': 1}\n\nepochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,\n proj=False, baseline=(None, 0), reject=reject)\nepochs_no_proj.average().plot(spatial_colors=True)\n\n\nepochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True,\n baseline=(None, 0), reject=reject)\nepochs_proj.average().plot(spatial_colors=True)", "Looks cool right? It is however often not clear how many components you\nshould take and unfortunately this can have bad consequences as can be seen\ninteractively using the delayed SSP mode:", "evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,\n proj='delayed', baseline=(None, 0),\n reject=reject).average()\n\n# set time instants in seconds (from 50 to 150ms in a step of 10ms)\ntimes = np.arange(0.05, 0.15, 0.01)\n\nevoked.plot_topomap(times, proj='interactive')", "now you should see checkboxes. Remove a few SSP and see how the auditory\npattern suddenly drops off" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
krosaen/ml-study
python-ml-book/ch12/ch12.ipynb
mit
[ "Chapter 12: Training Artificial Neural Networks for Image Recognition\nIn this notebook I work through chapter 12 of Python Machine Learning—see the author's definitive notes.\nLoading in the MNIST hand written image data set", "import os\nimport struct\nimport numpy as np\n\ndef load_mnist(path, kind='train'):\n \"\"\"Load MNIST data from `path`\"\"\"\n labels_path = os.path.join(path, \n '%s-labels-idx1-ubyte' % kind)\n images_path = os.path.join(path, \n '%s-images-idx3-ubyte' % kind)\n \n with open(labels_path, 'rb') as lbpath:\n magic, n = struct.unpack('>II', \n lbpath.read(8))\n labels = np.fromfile(lbpath, \n dtype=np.uint8)\n\n with open(images_path, 'rb') as imgpath:\n magic, num, rows, cols = struct.unpack(\">IIII\", \n imgpath.read(16))\n images = np.fromfile(imgpath, \n dtype=np.uint8).reshape(len(labels), 784)\n \n return images, labels\n\nX_train, y_train = load_mnist('mnist', kind='train')\nprint('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))\n\nX_test, y_test = load_mnist('mnist', kind='t10k')\nprint('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\n\nfig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True,)\nax = ax.flatten()\nfor i in range(10):\n img = X_train[y_train == i][0].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n\nax[0].set_xticks([])\nax[0].set_yticks([])\nplt.tight_layout()\nplt.show()", "Show a bunch of 4s", "fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True,)\nax = ax.flatten()\nfor i in range(25):\n img = X_train[y_train == 4][i].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n\nax[0].set_xticks([])\nax[0].set_yticks([])\nplt.tight_layout()\nplt.show()", "Classifying with tree based models\nLet's see how well some other models do before we get to the neural net.", "from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\n\ntree10 = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0)\ntree100 = DecisionTreeClassifier(criterion='entropy', max_depth=100, random_state=0)\n\nrf10 = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1)\nrf100 = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=1)\n\nlabeled_models = [\n ('decision tree depth 10', tree10),\n ('decision tree depth 100', tree100),\n ('random forest 10 estimators', rf10),\n ('random forest 100 estimators', rf100),\n]\n\n\nimport time\nimport subprocess\n\ndef say_done(label):\n subprocess.call(\"say 'done with {}'\".format(label), shell=True)\n\nfor label, model in labeled_models:\n before = time.time()\n model.fit(X_train, y_train)\n after = time.time()\n\n print(\"{} fit the dataset in {:.1f} seconds\".format(label, after - before))\n say_done(label)\n\nfrom sklearn.metrics import accuracy_score\n\nfor label, model in labeled_models:\n print(\"{} training fit: {:.3f}\".format(label, accuracy_score(y_train, model.predict(X_train)))) \n print(\"{} test accuracy: {:.3f}\".format(label, accuracy_score(y_test, model.predict(X_test)))) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
gloriakang/vax-sentiment
to_do/vax_temp/test.ipynb
mit
[ "conversion, drawing, saving, analysis\n\ncopy of dan's thing\nconverts .csv to .gml and .net\ndraws graph, saves graph.png\ntry to combine into this", "import pandas as pd\nimport numpy as np\nimport networkx as nx\nfrom copy import deepcopy\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom matplotlib.backends.backend_pdf import PdfPages\n\nfrom glob import glob\nfileName = 'article0'\n\ndef getFiles(fileName):\n matches = glob('*'+fileName+'*')\n bigFile = matches[0]\n data = pd.DataFrame.from_csv(bigFile)\n return clearSource(data)\n \n\ndef clearSource(data):\n columns = ['source','target']\n pre = len(data)\n for column in columns:\n data = data[pd.notnull(data[column])]\n post = len(data)\n print \"Filtered %s rows to %s rows by removing rows with blank values in columns %s\" % (pre,post,columns)\n return data\n \n \n#data = getFiles(fileName)\n\ndef getStuff(data,labels):\n forEdges = labels == ['edge']\n columns = list(data.columns.values)\n items = dict()\n \n nameFunc = {True: lambda x,y: '%s - %s - %s' % (x['source'],x['edge'],x['target']),\n False: lambda x,y: x[y]}[forEdges]\n \n extra = ['source','target'] * forEdges\n \n for label in labels:\n relevant = [col for col in columns if label+'-' in col] + extra\n #relevant = extra\n print \"Extracting %s data from %s\" % (label,relevant)\n for i in data.index:\n row = data.ix[i]\n for col in relevant:\n if str(row[col]).lower() != 'nan':\n name = nameFunc(row,label)\n if name not in items:\n items[name] = dict()\n items[name][col.replace(label+'-','')] = row[col]\n return items\n \n\ndef getNodes(data):\n return getStuff(data,['source','target'])\n\n\ndef getEdges(data):\n return getStuff(data,['edge'])\n \n \n#allNodes = getNodes(data); allEdges = getEdges(data)\n\ndef addNodes(graph,nodes):\n for key,value in nodes.iteritems():\n graph.add_node(key,attr_dict=value)\n return graph\n \ndef addEdges(graph,edges):\n for key,value in edges.iteritems():\n value['label'] = key\n value['edge'] = key.split(' - ')[1]\n graph.add_edge(value['source'],value['target'],attr_dict = value)\n return graph\n \n\n#########\n\ndef createNetwork(edges,nodes):\n graph = nx.MultiGraph()\n graph = addNodes(graph,nodes)\n graph = addEdges(graph,edges)\n return graph\n\n\n#fullGraph = createNetwork(allEdges,allNodes)\n\ndef drawIt(graph,what='graph', save_plot=None):\n style=nx.spring_layout(graph)\n size = graph.number_of_nodes()\n print \"Drawing %s of size %s:\" % (what,size)\n if size > 20:\n plt.figure(figsize=(10,10))\n if size > 40:\n nx.draw(graph,style,node_size=60,font_size=8)\n if save_plot is not None:\n print('saving: {}'.format(save_plot))\n plt.savefig(save_plot)\n else:\n nx.draw(graph,style)\n if save_plot is not None:\n print('saving: {}'.format(save_plot))\n plt.savefig(save_plot)\n else:\n nx.draw(graph,style)\n if save_plot is not None:\n print('saving: {}'.format(save_plot))\n plt.savefig(save_plot)\n plt.show()\n \n \ndef describeGraph(graph, save_plot=None):\n components = nx.connected_components(graph)\n components = list(components)\n isolated = [entry[0] for entry in components if len(entry)==1]\n params = (graph.number_of_edges(),graph.number_of_nodes(),len(components),len(isolated))\n print \"Graph has %s nodes, %s edges, %s connected components, and %s isolated nodes\\n\" % params\n drawIt(graph, save_plot=save_plot)\n for idx, sub in enumerate(components):\n drawIt(graph.subgraph(sub),what='component', save_plot='{}-{}.png'.format('component', idx))\n print \"Isolated nodes:\", isolated\n\ndef 
getGraph(fileRef, save_plot=None):\n data = getFiles(fileName)\n nodes = getNodes(data)\n edges = getEdges(data)\n graph = createNetwork(edges,nodes)\n fileOut = fileRef.split('.')[0]+'.gml'\n print \"Writing GML file to %s\" % fileOut\n nx.write_gml(graph, fileOut)\n \n fileOutNet = fileRef.split('.')[0]+'.net'\n print \"Writing net file to %s\" % fileOutNet\n nx.write_pajek(graph, fileOutNet)\n \n describeGraph(graph, save_plot)\n return graph, nodes, edges\n\nfileName = 'data/csv/article1'\ngraph, nodes, edges = getGraph(fileName, save_plot='graph.png')\n\nplt.figure(figsize=(12, 12))\nnx.draw_spring(graph, node_color='g', with_labels=True, arrows=True)\nplt.show()\n\n# return a dictionary of centrality values for each node\nnx.degree_centrality(graph)", "degree centrality\nfor a node v is the fraction of nodes it is connected to", "# the type of degree centrality is a dictionary\ntype(nx.degree_centrality(graph))\n\n# get all the values of the dictionary, this returns a list of centrality scores\n# turn the list into a numpy array\n# take the mean of the numpy array\nnp.array(nx.degree_centrality(graph).values()).mean()", "closeness centrality\nof a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Since the sum of distances depends on the number of nodes in the graph, closeness is normalized by the sum of minimum possible distances n-1. Notice that higher values of closeness indicate higher centrality.", "nx.closeness_centrality(graph)", "betweenness centrality\nof a node v is the sum of the fraction of all-pairs shortest paths that pass through v", "nx.betweenness_centrality(graph)\nnp.array(nx.betweenness_centrality(graph).values()).mean()", "degree assortativity coefficient\nAssortativity measures the similarity of connections in the graph with respect to the node degree.", "nx.degree_assortativity_coefficient(graph)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ozak/geopandas
examples/overlays.ipynb
bsd-3-clause
[ "Spatial overlays allow you to compare two GeoDataFrames containing polygon or multipolygon geometries \nand create a new GeoDataFrame with the new geometries representing the spatial combination and\nmerged properties. This allows you to answer questions like\n\nWhat are the demographics of the census tracts within 1000 ft of the highway?\n\nThe basic idea is demonstrated by the graphic below but keep in mind that overlays operate at the dataframe level, \nnot on individual geometries, and the properties from both are retained", "from IPython.core.display import Image\nImage(url=\"http://docs.qgis.org/testing/en/_images/overlay_operations.png\")", "Now we can load up two GeoDataFrames containing (multi)polygon geometries...", "%matplotlib inline\nfrom shapely.geometry import Point\nfrom geopandas import datasets, GeoDataFrame, read_file\nfrom geopandas.tools import overlay\n\n# NYC Boros\nzippath = datasets.get_path('nybb')\npolydf = read_file(zippath)\n\n# Generate some circles\nb = [int(x) for x in polydf.total_bounds]\nN = 10\npolydf2 = GeoDataFrame([\n {'geometry': Point(x, y).buffer(10000), 'value1': x + y, 'value2': x - y}\n for x, y in zip(range(b[0], b[2], int((b[2] - b[0]) / N)),\n range(b[1], b[3], int((b[3] - b[1]) / N)))])", "The first dataframe contains multipolygons of the NYC boros", "polydf.plot()", "And the second GeoDataFrame is a sequentially generated set of circles in the same geographic space. We'll plot these with a different color palette.", "polydf2.plot(cmap='tab20b')", "The geopandas.tools.overlay function takes three arguments:\n\ndf1\ndf2\nhow\n\nWhere how can be one of:\n['intersection',\n'union',\n'identity',\n'symmetric_difference',\n'difference']\n\nSo let's identify the areas (and attributes) where both dataframes intersect using the overlay tool.", "from geopandas.tools import overlay\nnewdf = overlay(polydf, polydf2, how=\"intersection\")\nnewdf.plot(cmap='tab20b')", "And take a look at the attributes; we see that the attributes from both of the original GeoDataFrames are retained.", "polydf.head()\n\npolydf2.head()\n\nnewdf.head()", "Now let's look at the other how operations:", "newdf = overlay(polydf, polydf2, how=\"union\")\nnewdf.plot(cmap='tab20b')\n\nnewdf = overlay(polydf, polydf2, how=\"identity\")\nnewdf.plot(cmap='tab20b')\n\nnewdf = overlay(polydf, polydf2, how=\"symmetric_difference\")\nnewdf.plot(cmap='tab20b')\n\nnewdf = overlay(polydf, polydf2, how=\"difference\")\nnewdf.plot(cmap='tab20b')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tlkh/Generating-Inference-from-3D-Printing-Jobs
Linear Regression.ipynb
mit
[ "Linear Regression\nThis is a test to use the scikit-learn's LinearRegression to model the amount of filament used per minute of the cohort class Edison+ 3D Printer.\nImport Dependencies", "import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn import linear_model\nfrom sklearn.metrics import mean_squared_error, r2_score\n\nimport csv\n%run 'preprocessor.ipynb' #our own preprocessor functions", "Prepare Dataset", "with open('data_w1w4.csv', 'r') as f:\n reader = csv.reader(f)\n data = list(reader)\n \nmatrix = obtain_data_matrix(data)\nsamples = len(matrix)\nprint(\"Number of samples: \" + str(samples))\n\nY = matrix[:,[8]]\nX = matrix[:,[9]]\nS = matrix[:,[11]]", "Use the model (LinearRegression)", "# Create linear regression object\nregr = linear_model.LinearRegression()\n\n# Train the model using the training sets\nregr.fit(X, Y)\n\n# Make predictions using the testing set\nY_pred = regr.predict(X)", "Plot the data", "fig = plt.figure(1, figsize=(10, 4))\nplt.scatter([X], [Y], color='blue', edgecolor='k')\nplt.plot(X, Y_pred, color='red', linewidth=1)\n\nplt.xticks(())\nplt.yticks(())\n\nprint('Coefficients: ', regr.coef_)\n\nplt.show()\n\n# The mean squared error\nprint(\"Mean squared error: %.2f\"\n % mean_squared_error(Y, Y_pred))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % r2_score(Y, Y_pred))", "Bootstrap to find parameter confidence intervals", "from sklearn.utils import resample\n\nbootstrap_resamples = 5000\nintercepts = []\ncoefs = []\nfor k in range(bootstrap_resamples):\n #resample population with replacement\n samples_resampled = resample(X,Y,replace=True,n_samples=len(X))\n \n ## Fit model to resampled data\n # Create linear regression object\n regr = linear_model.LinearRegression()\n\n # Train the model using the training sets\n regr.fit(samples_resampled[0], samples_resampled[1])\n \n coefs.append(regr.coef_[0][0])\n intercepts.append(regr.intercept_[0])", "Calculate confidence interval", "alpha = 0.95\np_lower = ((1-alpha)/2.0) * 100\np_upper = (alpha + ((1-alpha)/2.0)) * 100\ncoefs_lower = np.percentile(coefs,p_lower)\ncoefs_upper = np.percentile(coefs,p_upper)\nintercepts_lower = np.percentile(intercepts,p_lower)\nintercepts_upper = np.percentile(intercepts,p_upper)\nprint('Coefs %.0f%% CI = %.5f - %.5f' % (alpha*100,coefs_lower,coefs_upper))\nprint('Intercepts %.0f%% CI = %.5f - %.5f' % (alpha*100,intercepts_lower,intercepts_upper))", "Visualize frequency distributions of bootstrapped parameters", "plt.hist(coefs)\nplt.xlabel('Coefficient X0')\nplt.title('Frquency Distribution of Coefficient X0')\nplt.show()\n\nplt.hist(intercepts)\nplt.xlabel('Intercept')\nplt.title('Frquency Distribution of Intercepts')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI: Vertex AI Migration: AutoML Image Classification\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nDataset\nThe dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.", "import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.getenv(\"IS_TESTING\"):\n ! 
pip3 install --upgrade tensorflow $USER_FLAG", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aip", "Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.", "aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)", "Location of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.", "IMPORT_FILE = (\n \"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv\"\n)", "Quick peek at your data\nThis tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.", "if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! 
gsutil cat $FILE | head", "Create a dataset\ndatasets.create-dataset-api\nCreate the Dataset\nNext, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\nimport_schema_uri: The data labeling schema for the data items.\n\nThis operation may take several minutes.", "dataset = aip.ImageDataset.create(\n display_name=\"Flowers\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,\n)\n\nprint(dataset.resource_name)", "Example Output:\nINFO:google.cloud.aiplatform.datasets.dataset:Creating ImageDataset\nINFO:google.cloud.aiplatform.datasets.dataset:Create ImageDataset backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/1941426647739662336\nINFO:google.cloud.aiplatform.datasets.dataset:ImageDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592\nINFO:google.cloud.aiplatform.datasets.dataset:To use this ImageDataset in another session:\nINFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.ImageDataset('projects/759209241365/locations/us-central1/datasets/2940964905882222592')\nINFO:google.cloud.aiplatform.datasets.dataset:Importing ImageDataset data: projects/759209241365/locations/us-central1/datasets/2940964905882222592\nINFO:google.cloud.aiplatform.datasets.dataset:Import ImageDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/8100099138168815616\nINFO:google.cloud.aiplatform.datasets.dataset:ImageDataset data imported. 
Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592\nprojects/759209241365/locations/us-central1/datasets/2940964905882222592\n\nTrain a model\ntraining.automl-api\nCreate and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type task to train the model for.\nclassification: An image classification model.\nobject_detection: An image object detection model.\nmulti_label: If a classification task, whether single (False) or multi-labeled (True).\nmodel_type: The type of model for deployment.\nCLOUD: Deployment on Google Cloud\nCLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.\nCLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud.\nMOBILE_TF_VERSATILE_1: Deployment on an edge device.\nMOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.\nMOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.\nbase_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.\n\nThe instantiated object is the DAG (directed acyclic graph) for the training job.", "dag = aip.AutoMLImageTrainingJob(\n display_name=\"flowers_\" + TIMESTAMP,\n prediction_type=\"classification\",\n multi_label=False,\n model_type=\"CLOUD\",\n base_model=None,\n)\n\nprint(dag)", "Example output:\n&lt;google.cloud.aiplatform.training_jobs.AutoMLImageTrainingJob object at 0x7f806a6116d0&gt;\n\nRun the training pipeline\nNext, you run the DAG to start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\nvalidation_fraction_split: The percentage of the dataset to use for validation.\nbudget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).\ndisable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.\n\nThe run method when completed returns the Model resource.\nThe execution of the training pipeline will take upto 20 minutes.", "model = dag.run(\n dataset=dataset,\n model_display_name=\"flowers_\" + TIMESTAMP,\n training_fraction_split=0.8,\n validation_fraction_split=0.1,\n test_fraction_split=0.1,\n budget_milli_node_hours=8000,\n disable_early_stopping=False,\n)", "Example output:\nINFO:google.cloud.aiplatform.training_jobs:View Training:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/training/2109316300865011712?project=759209241365\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current 
state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:\nPipelineState.PIPELINE_STATE_RUNNING\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state:\nPipelineState.PIPELINE_STATE_RUNNING\n...\nINFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712\nINFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1284590221056278528\n\nEvaluate the model\nprojects.locations.models.evaluations.list\nReview model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.", "# Get model resource ID\nmodels = aip.Model.list(filter=\"display_name=flowers_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aip.gapic.ModelServiceClient(client_options=client_options)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)", "Example output:\nname: \"projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824\"\nmetrics_schema_uri: \"gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml\"\nmetrics {\n struct_value {\n fields {\n key: \"auPrc\"\n value {\n number_value: 0.9891107\n }\n }\n fields {\n key: \"confidenceMetrics\"\n value {\n list_value {\n values {\n struct_value {\n fields {\n key: \"precision\"\n value {\n number_value: 0.2\n }\n }\n fields {\n key: \"recall\"\n value {\n number_value: 1.0\n }\n }\n }\n }\n\nMake batch predictions\npredictions.batch-prediction\nGet test item(s)\nNow do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_items = !gsutil cat $IMPORT_FILE | head -n2\nif len(str(test_items[0]).split(\",\")) == 3:\n _, test_item_1, test_label_1 = str(test_items[0]).split(\",\")\n _, test_item_2, test_label_2 = str(test_items[1]).split(\",\")\nelse:\n test_item_1, test_label_1 = str(test_items[0]).split(\",\")\n test_item_2, test_label_2 = str(test_items[1]).split(\",\")\n\nprint(test_item_1, test_label_1)\nprint(test_item_2, test_label_2)", "Copy test item(s)\nFor the batch prediction, copy the test items over to your Cloud Storage bucket.", "file_1 = test_item_1.split(\"/\")[-1]\nfile_2 = test_item_2.split(\"/\")[-1]\n\n! gsutil cp $test_item_1 $BUCKET_NAME/$file_1\n! 
gsutil cp $test_item_2 $BUCKET_NAME/$file_2\n\ntest_item_1 = BUCKET_NAME + \"/\" + file_1\ntest_item_2 = BUCKET_NAME + \"/\" + file_2", "Make the batch input file\nNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the image.\nmime_type: The content type. In our example, it is a jpeg file.\n\nFor example:\n {'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}", "import json\n\nimport tensorflow as tf\n\ngcs_input_uri = BUCKET_NAME + \"/test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n data = {\"content\": test_item_1, \"mime_type\": \"image/jpeg\"}\n f.write(json.dumps(data) + \"\\n\")\n data = {\"content\": test_item_2, \"mime_type\": \"image/jpeg\"}\n f.write(json.dumps(data) + \"\\n\")\n\nprint(gcs_input_uri)\n! gsutil cat $gcs_input_uri", "Make the batch prediction request\nNow that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:\n\njob_display_name: The human readable name for the batch prediction job.\ngcs_source: A list of one or more batch request input files.\ngcs_destination_prefix: The Cloud Storage location for storing the batch prediction resuls.\nsync: If set to True, the call will block while waiting for the asynchronous batch job to complete.", "batch_predict_job = model.batch_predict(\n job_display_name=\"flowers_\" + TIMESTAMP,\n gcs_source=gcs_input_uri,\n gcs_destination_prefix=BUCKET_NAME,\n sync=False,\n)\n\nprint(batch_predict_job)", "Example output:\nINFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob\n&lt;google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0&gt; is waiting for upstream dependencies to complete.\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:\nJobState.JOB_STATE_RUNNING\n\nWait for completion of batch prediction job\nNext, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.", "batch_predict_job.wait()", "Example Output:\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. 
Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\nINFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:\nINFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')\nINFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:\nhttps://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_RUNNING\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:\nJobState.JOB_STATE_SUCCEEDED\nINFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328\n\nGet the predictions\nNext, get the results from the completed batch prediction job.\nThe results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. 
Each file contains one or more prediction requests in a JSON format:\n\ncontent: The prediction request.\nprediction: The prediction response.\nids: The internal assigned unique identifiers for each prediction request.\ndisplayNames: The class names for each class label.\nconfidences: The predicted confidence, between 0 and 1, per class label.", "import json\n\nimport tensorflow as tf\n\nbp_iter_outputs = batch_predict_job.iter_outputs()\n\nprediction_results = list()\nfor blob in bp_iter_outputs:\n if blob.name.split(\"/\")[-1].startswith(\"prediction\"):\n prediction_results.append(blob.name)\n\ntags = list()\nfor prediction_result in prediction_results:\n gfile_name = f\"gs://{bp_iter_outputs.bucket.name}/{prediction_result}\"\n with tf.io.gfile.GFile(name=gfile_name, mode=\"r\") as gfile:\n for line in gfile.readlines():\n line = json.loads(line)\n print(line)\n break", "Example Output:\n{'instance': {'content': 'gs://andy-1234-221921aip-20210802180634/100080576_f52e8ee070_n.jpg', 'mimeType': 'image/jpeg'}, 'prediction': {'ids': ['3195476558944927744', '1636105187967893504', '7400712711002128384', '2789026692574740480', '5501319568158621696'], 'displayNames': ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'], 'confidences': [0.99998736, 8.222247e-06, 3.6782617e-06, 5.3231275e-07, 2.6960555e-07]}}\n\nMake online predictions\npredictions.deploy-model-api\nDeploy the model\nNext, deploy your model for online prediction. To deploy the model, you invoke the deploy method.", "endpoint = model.deploy()", "Example output:\nINFO:google.cloud.aiplatform.models:Creating Endpoint\nINFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352\nINFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:To use this Endpoint in another session:\nINFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')\nINFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472\nINFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480\nINFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472\n\npredictions.online-prediction-automl\nGet test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.", "test_item = !gsutil cat $IMPORT_FILE | head -n1\nif len(str(test_item[0]).split(\",\")) == 3:\n _, test_item, test_label = str(test_item[0]).split(\",\")\nelse:\n test_item, test_label = str(test_item[0]).split(\",\")\n\nprint(test_item, test_label)", "Make the prediction\nNow that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.\nRequest\nSince in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). 
To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.\nThe format of each instance is:\n{ 'content': { 'b64': base64_encoded_bytes } }\n\nSince the predict() method can take multiple items (instances), send your single test item as a list of one test item.\nResponse\nThe response from the predict() call is a Python dictionary with the following entries:\n\nids: The internal assigned unique identifiers for each prediction request.\ndisplayNames: The class names for each class label.\nconfidences: The predicted confidence, between 0 and 1, per class label.\ndeployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.", "import base64\n\nimport tensorflow as tf\n\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()\n\n# The format of each instance should conform to the deployed model's prediction input schema.\ninstances = [{\"content\": base64.b64encode(content).decode(\"utf-8\")}]\n\nprediction = endpoint.predict(instances=instances)\n\nprint(prediction)", "Example output:\nPrediction(predictions=[{'ids': ['3195476558944927744', '5501319568158621696', '1636105187967893504', '2789026692574740480', '7400712711002128384'], 'displayNames': ['daisy', 'tulips', 'dandelion', 'sunflowers', 'roses'], 'confidences': [0.999987364, 2.69604527e-07, 8.2222e-06, 5.32310196e-07, 3.6782335e-06]}], deployed_model_id='5949545378826158080', explanations=None)\n\nUndeploy the model\nWhen you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.", "endpoint.undeploy_all()", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline trainig job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom trainig job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.12.1/examples/notebooks/generated/statespace_arma_0.ipynb
bsd-3-clause
[ "Autoregressive Moving Average (ARMA): Sunspots data\nThis notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.", "%matplotlib inline\n\nimport numpy as np\nfrom scipy import stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport statsmodels.api as sm\n\nfrom statsmodels.graphics.api import qqplot", "Sunspots Data", "print(sm.datasets.sunspots.NOTE)\n\ndta = sm.datasets.sunspots.load_pandas().data\n\ndta.index = pd.Index(pd.date_range(\"1700\", end=\"2009\", freq=\"A-DEC\"))\ndel dta[\"YEAR\"]\n\ndta.plot(figsize=(12,4));\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)\n\narma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)\nprint(arma_mod20.params)\n\narma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)\n\nprint(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)\n\nprint(arma_mod30.params)\n\nprint(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)", "Does our model obey the theory?", "sm.stats.durbin_watson(arma_mod30.resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nax = plt.plot(arma_mod30.resid)\n\nresid = arma_mod30.resid\n\nstats.normaltest(resid)\n\nfig = plt.figure(figsize=(12,4))\nax = fig.add_subplot(111)\nfig = qqplot(resid, line='q', ax=ax, fit=True)\n\nfig = plt.figure(figsize=(12,8))\nax1 = fig.add_subplot(211)\nfig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)\nax2 = fig.add_subplot(212)\nfig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)\n\nr,q,p = sm.tsa.acf(resid, fft=True, qstat=True)\ndata = np.c_[range(1,41), r[1:], q, p]\ntable = pd.DataFrame(data, columns=['lag', \"AC\", \"Q\", \"Prob(>Q)\"])\nprint(table.set_index('lag'))", "This indicates a lack of fit.\n\n\nIn-sample dynamic prediction. How good does our model do?", "predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)\n\nfig, ax = plt.subplots(figsize=(12, 8))\ndta.loc['1950':].plot(ax=ax)\npredict_sunspots.plot(ax=ax, style='r');\n\ndef mean_forecast_err(y, yhat):\n return y.sub(yhat).mean()\n\nmean_forecast_err(dta.SUNACTIVITY, predict_sunspots)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yuanotes/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10-python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 2\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. 
The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return x / 255.0\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n x = np.asarray(x)\n result = np.zeros((x.shape[0], 10))\n result[np.arange(x.shape[0]), x] = 1\n return result\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. 
\n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, shape=(None, ) + image_shape, name=\"x\")\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.uint8, shape=(None, n_classes), name=\"y\")\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, name=\"keep_prob\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n input_depth = x_tensor.get_shape().as_list()[-1]\n\n conv_strides = (1,) + conv_strides + (1, )\n pool_ksize = (1,) + pool_ksize + (1, )\n pool_strides = (1,) + pool_strides + (1, )\n\n \n weights = tf.Variable(tf.random_normal(list(conv_ksize) + [input_depth, conv_num_outputs]))\n bias = tf.Variable(tf.zeros([conv_num_outputs])) \n x = tf.nn.conv2d(x_tensor, weights, conv_strides, 'SAME')\n x = tf.nn.bias_add(x, bias)\n x = tf.nn.relu(x) \n x = tf.nn.max_pool(x, pool_ksize, pool_strides, 'SAME')\n \n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "from tensorflow.contrib.layers.python import layers\n\ndef flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n return layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "from tensorflow.contrib.layers.python import layers\n\ndef fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n \n x = layers.fully_connected(x_tensor, num_outputs)\n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. 
For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n return layers.fully_connected(x_tensor, num_outputs, activation_fn=None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n conv_ksize = (2, 2)\n conv_strides = (2, 2)\n pool_ksize = (2, 2)\n pool_strides = (2, 2) \n conv_output = 32\n x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)\n# x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)\n# x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n x = flatten(x)\n \n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n x = fully_conn(x, 4096)\n# x = tf.nn.relu(x)\n x = tf.nn.dropout(x, keep_prob)\n\n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n num_outputs = 10\n x = output(x, num_outputs)\n return x\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle 
Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability}) \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n accuracy_val = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n print('Cost: %f, Accuracy: %.2f%%' % (cost_val, accuracy_val * 100))", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = 10\nbatch_size = 256\nkeep_probability = 0.5", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. 
Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
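Not part of the notebook itself: a back-of-the-envelope shape check for the conv_net built above, assuming its chosen hyperparameters (a stride-2 convolution followed by stride-2 max pooling, both with SAME padding, and 32 output channels). With SAME padding the spatial output size depends only on the stride, so no TensorFlow is needed to see what the flatten layer receives.

```python
import math

def same_out(size, stride):
    # With SAME padding, output spatial size = ceil(input size / stride).
    return math.ceil(size / stride)

h = w = 32                 # CIFAR-10 images are 32x32x3
h = w = same_out(h, 2)     # convolution, stride (2, 2) -> 16x16
h = w = same_out(h, 2)     # max pooling, stride (2, 2) -> 8x8
depth = 32                 # conv_num_outputs

print((h, w, depth), "->", h * w * depth, "values per image after flatten")  # (8, 8, 32) -> 2048
```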
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ExaScience/smurff
docs/notebooks/different_methods.ipynb
mit
[ "Trying different Matrix Factorzation Methods\nIn this notebook we will try out several MF methods supported by SMURFF.\nDownloading the files\nAs in the previous example we download the ChemBL dataset. The resulting IC50 matrix is a compound x protein matrix, split into train and test. The ECFP matrix has features as side information on the compounds.", "import smurff\nimport logging\n\nlogging.basicConfig(level = logging.INFO)\n\nic50_train, ic50_test, ecfp = smurff.load_chembl()", "Matrix Factorization without Side Information (BPMF)\nAs a first example we can run SMURFF without side information. The method used here is BPMF.\nInput matrix for Y is a sparse scipy matrix (either coo_matrix, csr_matrix or csc_matrix). The test matrix\nYtest also needs to ne sparse matrix of the same size as Y. Here we have used burn-in of 20 samples for the Gibbs sampler and then collected 80 samples from the model. We use 16 latent dimensions in the model.\nFor good results you will need to run more sampling and burnin iterations (>= 1000) and maybe more latent dimensions.\nWe create a trainSession, and the run method returns the predictions of the Ytest matrix. predictions is a list of of type Prediction.", "\ntrainSession = smurff.BPMFSession(\n Ytrain = ic50_train,\n Ytest = ic50_test,\n num_latent = 16,\n burnin = 20,\n nsamples = 80,\n verbose = 0,)\n\npredictions = trainSession.run()\nprint(\"First prediction element: \", predictions[0])\n\nrmse = smurff.calc_rmse(predictions)\nprint(\"RMSE =\", rmse)", "Matrix Factorization with Side Information (Macau)\nIf we want to use the compound features we can use the Macau algorithm.\nThe parameter side_info = [ecfp, None] sets the side information for rows and columns, respectively. In this example we only use side information for the compounds (rows of the matrix).\nSince the ecfp sideinfo is sparse and large, we use the CG solver from Macau to reduce the memory footprint and speedup the computation.", "predictions = smurff.MacauSession(\n Ytrain = ic50_train,\n Ytest = ic50_test,\n side_info = [ecfp, None],\n direct = False, # use CG solver instead of Cholesky decomposition\n num_latent = 16,\n burnin = 40,\n nsamples = 100).run()\n\nsmurff.calc_rmse(predictions)", "Macau univariate sampler\nSMURFF also includes an option to use a very fast univariate sampler, i.e., instead of sampling blocks of variables jointly it samples each individually. An example:", "predictions = smurff.MacauSession(\n Ytrain = ic50_train,\n Ytest = ic50_test,\n side_info = [ecfp, None],\n direct = True,\n univariate = True,\n num_latent = 32,\n burnin = 500,\n nsamples = 3500,\n verbose = 0,).run()\nsmurff.calc_rmse(predictions)", "When using it we recommend using larger values for burnin and nsamples, because the univariate sampler mixes slower than the blocked sampler." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Startupsci/data-science-notebooks
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
mit
[ "Python Data Files\nFile operations using Python and libraries.\nCSV files in native Python\nPython provides a native module to perform CSV file operations.\nOfficial documentation: CSV module in Python 2.7\nList to CSV", "# For reading/writing CSV files\nimport csv\n# For listing system file folders\nfrom subprocess import check_output\n\n# Use with open to ensure file is closed when block ends\n# The wb flag opens file for writing\nwith open('data/fileops/vehicles.csv', 'wb') as csv_file:\n # Prepare csv writer\n wtr = csv.writer(csv_file, delimiter=',', quotechar='\"',\n quoting=csv.QUOTE_MINIMAL)\n # Write CSV header row\n wtr.writerow(['type', 'wheels', 'speed', 'weight', 'invented'])\n # Write CSV data rows\n wtr.writerow(['Scooter', 2, 150, 109.78, 1817])\n wtr.writerow(['Car', 4, 250, 1818.45, 1885]) \n wtr.writerow(['Plane', 10, 850, 270000, 1903])\n\n# Check file created\nprint(check_output([\"ls\", \"data/fileops\"]).decode(\"utf8\"))", "CSV to List", "# The rb flag opens file for reading\nwith open('data/fileops/vehicles.csv', 'rb') as csv_file:\n rdr = csv.reader(csv_file, delimiter=',', quotechar='\"')\n for row in rdr:\n print '\\t'.join(row)", "Dictionary to CSV", "# Dictionary data structures can be used to represent rows\ngame1_scores = {'Game':'Quarter', 'Team A': 45, 'Team B': 90}\ngame2_scores = {'Game':'Semi', 'Team A': 80, 'Team B': 32}\ngame3_scores = {'Game':'Final', 'Team A': 70, 'Team B': 68}\n\nheaders = ['Game', 'Team A', 'Team B']\n\n# Create CSV from dictionaries\nwith open('data/fileops/game-scores.csv', 'wb') as df:\n dict_wtr = csv.DictWriter(df, fieldnames=headers)\n dict_wtr.writeheader()\n dict_wtr.writerow(game1_scores)\n dict_wtr.writerow(game2_scores)\n dict_wtr.writerow(game3_scores)\n\nprint(check_output([\"ls\", \"data/fileops\"]).decode(\"utf8\")) ", "CSV to Dictionary", "# Read CSV into dictionary data structure\nwith open('data/fileops/game-scores.csv', 'rb') as df:\n dict_rdr = csv.DictReader(df)\n for row in dict_rdr:\n print('\\t'.join([row['Game'], row['Team A'], row['Team B']]))\n print('\\t'.join(row.keys()))", "Pandas for CSV file operations\nPandas goal is to become the most powerful and flexible open source data analysis / manipulation tool available in any language. Pandas includes file operations capabilities for CSV, among other formats.\nCSV operations in Pandas are much faster than in native Python.\nDataFrame to CSV", "import pandas as pd\n\n# Create a DataFrame\ndf = pd.DataFrame({\n 'Name' : ['Josh', 'Eli', 'Ram', 'Bil'],\n 'Sales' : [34.32, 12.1, 4.77, 31.63],\n 'Region' : ['North', 'South', 'West', 'East'],\n 'Product' : ['PC', 'Phone', 'SW', 'Cloud']})\ndf\n\n# DataFrame to CSV\ndf.to_csv('data/fileops/sales.csv', index=False)\n\nprint(check_output([\"ls\", \"data/fileops\"]).decode(\"utf8\"))", "CSV to DataFrame", "# CSV to DataFrame\ndf2 = pd.read_csv('data/fileops/sales.csv')\n\ndf2", "DataFrame to Excel", "# DataFrame to XLSX Excel file\ndf.to_excel('data/fileops/sales.xlsx', index=False)\n\nprint(check_output([\"ls\", \"data/fileops\"]).decode(\"utf8\"))", "Excel to DataFrame", "# Excel to DataFrame\ndf3 = pd.read_excel('data/fileops/sales.xlsx')\n\ndf3" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NYUDataBootcamp/Materials
Code/notebooks/bootcamp_pandas-summarize.ipynb
mit
[ "Pandas 5: Summarizing data\nAnother in a series of notebooks that describe Pandas' powerful data management tools. In this one we summarize our data in a variety of ways. Which is more interesting than it sounds. \nOutline: \n\nWEO government debt data. Something to work with. How does Argentina's government debt compare to the debt of other countries? How did it compare when it defaulted in 2001? \nDescribing numerical data. Descriptive statistics: numbers of non-missing values, mean, median, quantiles. \nDescribing catgorical data. The excellent value_counts method. \nGrouping data. An incredibly useful collection of tools based on grouping data based on a variable: men and woman, grads and undergrads, and so on. \n\nNote: requires internet access to run. \nThis Jupyter notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp. \n<a id=prelims></a>\nPreliminaries\nImport packages, etc.", "import sys # system module \nimport pandas as pd # data package\nimport matplotlib.pyplot as plt # graphics module \nimport datetime as dt # date and time module\nimport numpy as np # foundation for Pandas \n\n%matplotlib inline \n\n# check versions (overkill, but why not?)\nprint('Python version:', sys.version)\nprint('Pandas version: ', pd.__version__)\nprint('Today: ', dt.date.today())", "<a id=weo></a>\nWEO data on government debt\nWe use the IMF's data on government debt again, specifically its World Economic Outlook database, commonly referred to as the WEO. We focus on government debt expressed as a percentage of GDP, variable code GGXWDG_NGDP. \nThe central question here is how the debt of Argentina, which defaulted in 2001, compared to other countries. Was it a matter of too much debt or something else? \nLoad data\nFirst step: load the data and extract a single variable: government debt (code GGXWDG_NGDP) expressed as a percentage of GDP.", "url1 = \"http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/\"\nurl2 = \"WEOOct2016all.xls\"\nurl = url1 + url2 \nweo = pd.read_csv(url, sep='\\t', \n usecols=[1,2] + list(range(19,46)), \n thousands=',', \n na_values=['n/a', '--']) \nprint('Variable dtypes:\\n', weo.dtypes.head(6), sep='')", "Clean and shape\nSecond step: select the variable we want and generate the two dataframes.", "# select debt variable \nvariables = ['GGXWDG_NGDP']\ndb = weo[weo['WEO Subject Code'].isin(variables)]\n\n# drop variable code column (they're all the same) \ndb = db.drop('WEO Subject Code', axis=1)\n\n# set index to country code \ndb = db.set_index('ISO')\n\n# name columns \ndb.columns.name = 'Year'\n\n# transpose \ndbt = db.T\n\n# see what we have \ndbt.head()", "Example. Let's try a simple graph of the dataframe dbt. The goal is to put Argentina in perspective by plotting it along with many other countries.", "fig, ax = plt.subplots()\ndbt.plot(ax=ax, \n legend=False, color='blue', alpha=0.3, \n ylim=(0,150)\n )\nax.set_ylabel('Percent of GDP')\nax.set_xlabel('')\nax.set_title('Government debt', fontsize=14, loc='left')\ndbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)", "Exercise. \n\nWhat do you take away from this graph? \nWhat would you change to make it look better?\nTo make it mnore informative?\nTo put Argentina's debt in context? \n\nExercise. Do the same graph with Greece (GRC) as the country of interest. How does it differ? Why do you think that is? \n<a id=describe></a>\nDescribing numerical data\nLet's step back a minute. What we're trying to do is compare Argentina to other countries. 
What's the best way to do that? This isn't a question with an obvious best answer, but we can try some things, see how they look. One thing we could do is compare Argentina to the mean or median. Or to some other feature of the distribution. \nWe work up to this by looking first at some features of the distribution of government debt numbers across countries. Some of this we've seen, some is new. \nWhat's (not) there?\nLet's check out the data first. How many non-missing values do we have at each date? We can do that with the count method. The argument axis=1 says to do this by date, counting across columns (axis number 1).", "dbt.shape\n\n# count non-missing values \ndbt.count(axis=1).plot()", "Describing series\nLet's take the data for 2001 -- the year of Argentina's default -- and see how Argentina compares. Was its debt high compared to other countries? \nWhich leads to more questions. How would we compare? Compare Argentina to the mean or median? Something else? \nLet's see how that works.", "# 2001 data \ndb01 = db['2001'] \n\ndb01['ARG']\n\ndb01.mean()\n\ndb01.median()\n\ndb01.describe()\n\ndb01.quantile(q=[0.25, 0.5, 0.75])", "Comment. If we add enough quantiles, we might as well plot the whole distribution. The easiest way to do this is with a histogram.", "fig, ax = plt.subplots()\ndb01.hist(bins=15, ax=ax, alpha=0.35)\nax.set_xlabel('Government Debt (Percent of GDP)')\nax.set_ylabel('Number of Countries')\n\nymin, ymax = ax.get_ylim()\nax.vlines(db01['ARG'], ymin, ymax, color='blue', lw=2) ", "Comment. Compared to the whole sample of countries in 2001, it doesn't seem that Argentina had particularly high debt.\nDescribing dataframes\nWe can compute the same statistics for dataframes. Here we have a choice: we can compute (say) the mean down rows (axis=0) or across columns (axis=1). If we use the dataframe dbt, computing the mean across countries (columns) calls for axis=1.", "# here we compute the mean across countries at every date\ndbt.mean(axis=1).head()\n\n# or we could do the median\ndbt.median(axis=1).head()\n\n# or a bunch of stats at once \n# NB: db not dbt (there's no axis argument here)\ndb.describe()\n\n# the other way \ndbt.describe()", "Example. Let's add the mean to our graph. We make it a dashed line with linestyle='dashed'.", "fig, ax = plt.subplots()\ndbt.plot(ax=ax, \n legend=False, color='blue', alpha=0.2, \n ylim=(0,200)\n )\ndbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)\nax.set_ylabel('Percent of GDP')\nax.set_xlabel('')\nax.set_title('Government debt', fontsize=14, loc='left')\ndbt.mean(axis=1).plot(ax=ax, color='black', linewidth=2, linestyle='dashed')", "Question. Do you think this looks better when the mean varies with time, or when we use a constant mean? Let's try it and see.", "dbar = dbt.mean().mean()\ndbar\n\nfig, ax = plt.subplots()\ndbt.plot(ax=ax, \n legend=False, color='blue', alpha=0.3, \n ylim=(0,150)\n )\ndbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)\nax.set_ylabel('Percent of GDP')\nax.set_xlabel('')\nax.set_title('Government debt', fontsize=14, loc='left') \nxmin, xmax = ax.get_xlim()\nax.hlines(dbar, xmin, xmax, linewidth=2, linestyle='dashed')", "Exercise. Which do we like better?\nExercise. Replace the (constant) mean with the (constant) median. Which do you prefer? \n<a id=value-counts></a>\nDescribing categorical data\nA categorical variable is one that takes on a small number of values. States take on one of fifty values. University students are either grad or undergrad. Students select majors and concentrations. 
\nWe're going to do two things with categorical data: \n\nIn this section, we count the number of observations in each category using the value_counts method. This is a series method, we apply it to one series/variable at a time. \nIn the next section, we go on to describe how other variables differ across catagories. How do students who major in finance differ from those who major in English? And so on. \n\nWe start with the combined MovieLens data we constructed in the previous notebook.", "url = 'http://pages.stern.nyu.edu/~dbackus/Data/mlcombined.csv'\nml = pd.read_csv(url, index_col=0,encoding = \"ISO-8859-1\")\nprint('Dimensions:', ml.shape)\n\n# fix up the dates\nml[\"timestamp\"] = pd.to_datetime(ml[\"timestamp\"], unit=\"s\")\nml.head(10)\n\n# which movies have the most ratings? \nml['title'].value_counts().head(10)\n\nml['title'].value_counts().head(10).plot.barh(alpha=0.5)\n\n# which people have rated the most movies?\nml['userId'].value_counts().head(10)", "<a id=groupby></a>\nGrouping data\nNext up: group data by some variable. As an example, how would we compute the average rating of each movie? If you think for a minute, you might think of these steps:\n\nGroup the data by movie: Put all the \"Pulp Fiction\" ratings in one bin, all the \"Shawshank\" ratings in another. We do that with the groupby method. \nCompute a statistic (the mean, for example) for each group. \n\nPandas has tools that make that relatively easy.", "# group \ng = ml[['title', 'rating']].groupby('title')\ntype(g)", "Now that we have a groupby object, what can we do with it?", "# the number in each category\ng.count().head(10)\n\n# what type of object have we created?\ntype(g.count())", "Comment. Note that the combination of groupby and count created a dataframe with\n\nIts index is the variable we grouped by. If we group by more than one, we get a multi-index.\nIts columns are the other variables. \n\nExercise. Take the code \npython\ncounts = ml.groupby(['title', 'movieId'])\nWithout running it, what is the index of counts? What are its columns?", "counts = ml.groupby(['title', 'movieId']).count()\n\ngm = g.mean()\ngm.head()\n\n# we can put them together \ngrouped = g.count()\ngrouped = grouped.rename(columns={'rating': 'Number'})\ngrouped['Mean'] = g.mean()\ngrouped.head(10)\n\ngrouped.plot.scatter(x='Number', y='Mean')", "Exercise. Compute the median and add it to the dataframe. \nResources\nThe Brandon Rhodes video covers most of this, too." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pastas/pasta
examples/notebooks/16_uncertainty.ipynb
mit
[ "Uncertainty quantification\nR.A. Collenteur, University of Graz, WIP (May-2021)\nIn this notebook it is shown how to compute the uncertainty of the model simulation using the built-in uncertainty quantification options of Pastas. \n\nConfidence interval of simulation\nPrediction interval of simulation\nConfidence interval of step response\nConfidence interval of block response\nConfidence interval of contribution\nCustom confidence intervals\n\nThe compute the confidence intervals, parameters sets are drawn from a multivariate normal distribution based on the jacobian matrix obtained during parameter optimization. This method to quantify uncertainties has some underlying assumptions on the model residuals (or noise) that should be checked. This notebook only deals with parameter uncertainties and not with model structure uncertainties.", "import pandas as pd\nimport pastas as ps\n\nimport matplotlib.pyplot as plt\n\nps.set_log_level(\"ERROR\")\nps.show_versions()", "Create a model\nWe first create a toy model to simulate the groundwater levels in southeastern Austria. We will use this model to illustrate how the different methods for uncertainty quantification can be used.", "gwl = pd.read_csv(\"data_wagna/head_wagna.csv\", index_col=0, parse_dates=True, \n squeeze=True, skiprows=2).loc[\"2006\":].iloc[0::10]\nevap = pd.read_csv(\"data_wagna/evap_wagna.csv\", index_col=0, parse_dates=True, \n squeeze=True, skiprows=2)\nprec = pd.read_csv(\"data_wagna/rain_wagna.csv\", index_col=0, parse_dates=True, \n squeeze=True, skiprows=2)\n\n# Model settings\ntmin = pd.Timestamp(\"2007-01-01\") # Needs warmup\ntmax = pd.Timestamp(\"2016-12-31\")\n\nml = ps.Model(gwl)\nsm = ps.RechargeModel(prec, evap, recharge=ps.rch.FlexModel(), \n rfunc=ps.Exponential, name=\"rch\")\nml.add_stressmodel(sm)\n\n# Add the ARMA(1,1) noise model and solve the Pastas model\nml.add_noisemodel(ps.ArmaModel())\nml.solve(tmin=tmin, tmax=tmax, noise=True)", "Diagnostic Checks\nBefore we perform the uncertainty quantification, we should check if the underlying statistical assumptions are met. We refer to the notebook on Diagnostic checking for more details on this.", "ml.plots.diagnostics();", "Confidence intervals\nAfter the model is calibrated, a fit attribute is added to the Pastas Model object (ml.fit). 
This object contains information about the optimizations (e.g., the jacobian matrix) and a number of methods that can be used to quantify uncertainties.", "ci = ml.fit.ci_simulation(alpha=0.05, n=1000)\nax = ml.plot(figsize=(10,3));\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color=\"lightgray\")\nax.legend([\"Observations\", \"Simulation\", \"95% Confidence interval\"], ncol=3, loc=2)", "Prediction interval", "ci = ml.fit.prediction_interval(n=1000)\nax = ml.plot(figsize=(10,3));\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color=\"lightgray\")\nax.legend([\"Observations\", \"Simulation\", \"95% Prediction interval\"], ncol=3, loc=2)", "Uncertainty of step response", "ci = ml.fit.ci_step_response(\"rch\")\nax = ml.plots.step_response(figsize=(6,2))\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color=\"lightgray\")\nax.legend([\"Simulation\", \"95% Prediction interval\"], ncol=3, loc=4)", "Uncertainty of block response", "ci = ml.fit.ci_block_response(\"rch\")\nax = ml.plots.block_response(figsize=(6,2))\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color=\"lightgray\")\nax.legend([\"Simulation\", \"95% Prediction interval\"], ncol=3, loc=1)", "Uncertainty of the contributions", "ci = ml.fit.ci_contribution(\"rch\")\nr = ml.get_contribution(\"rch\")\nax = r.plot(figsize=(10,3))\nax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color=\"lightgray\")\nax.legend([\"Simulation\", \"95% Prediction interval\"], ncol=3, loc=1)\nplt.tight_layout()", "Custom Confidence intervals\nIt is also possible to compute the confidence intervals manually, for example to estimate the uncertainty in the recharge or statistics (e.g., SGI, NSE). We can call ml.fit.get_parameter_sample to obtain random parameter samples from a multivariate normal distribution using the optimal parameters and the covariance matrix. Next, we use the parameter sets to obtain multiple simulations of 'something', here the recharge.", "params = ml.fit.get_parameter_sample(n=1000, name=\"rch\")\ndata = {}\n\n# Here we run the model n times with different parameter samples\nfor i, param in enumerate(params):\n data[i] = ml.stressmodels[\"rch\"].get_stress(p=param)\n\ndf = pd.DataFrame.from_dict(data, orient=\"columns\").loc[tmin:tmax].resample(\"A\").sum()\nci = df.quantile([0.025, .975], axis=1).transpose()\n\nr = ml.get_stress(\"rch\").resample(\"A\").sum()\nax = r.plot.bar(figsize=(10,2), width=0.5, yerr=[r-ci.iloc[:,0], ci.iloc[:,1]-r])\nax.set_xticklabels(labels=r.index.year, rotation=0, ha='center')\nax.set_ylabel(\"Recharge [mm a$^{-1}$]\")\nax.legend(ncol=3);", "Uncertainty of the NSE\nThe code pattern shown above can be used for many types of uncertainty analyses. Another example is provided below, where we compute the uncertainty of the Nash-Sutcliffe efficacy.", "params = ml.fit.get_parameter_sample(n=1000)\ndata = []\n\n# Here we run the model n times with different parameter samples\nfor i, param in enumerate(params):\n sim = ml.simulate(p=param)\n data.append(ps.stats.nse(obs=ml.observations(), sim=sim))\n\nfig, ax = plt.subplots(1,1, figsize=(4,3))\nplt.hist(data, bins=50, density=True)\nax.axvline(ml.stats.nse(), linestyle=\"--\", color=\"k\")\nax.set_xlabel(\"NSE [-]\")\nax.set_ylabel(\"frequency [-]\")\n\nfrom scipy.stats import norm\nimport numpy as np\n\nmu, std = norm.fit(data)\n\n# Plot the PDF.\nxmin, xmax = ax.set_xlim()\nx = np.linspace(xmin, xmax, 100)\np = norm.pdf(x, mu, std)\nax.plot(x, p, 'k', linewidth=2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
aemerick/galaxy_analysis
physics_data/UVB/grackle_tables/photo.ipynb
mit
[ "Author: Britton Smith\nModified by : Andrew Emerick", "import h5py\nfrom matplotlib import pyplot\n%matplotlib inline\nimport numpy as np\n\nLW_model = 'Qin2020'\n\npyplot.rcParams['figure.figsize'] = (10, 6)\npyplot.rcParams['font.size'] = 14\n\nfrom make_table import k31_JW2012, k31_RFT14, k31_Qin2020\n\ndef interpvals(xtab, ytab, x, log=True):\n i = np.digitize(x, xtab) - 1\n i = np.clip(i, 0, xtab.size-2)\n if log:\n m = np.log10(ytab[i+1] / ytab[i]) / np.log10(xtab[i+1] / xtab[i])\n return np.power(10, m * np.log10(x / xtab[i]) + np.log10(ytab[i]))\n else:\n m = (ytab[i+1] - ytab[i]) / (xtab[i+1] - xtab[i])\n return m * (x - xtab[i]) + ytab[i]\n\ndef load_rates(filename, group, rates, zero_val=1e-50):\n print (\"Loading rates from %s: %s\" % (filename, rates))\n data = {}\n fh = h5py.File(filename, 'r')\n g = fh['UVBRates']\n data['z'] = g['z'].value\n for rate in rates:\n data[rate] = g[group][rate].value\n data[rate] = np.clip(data[rate], 1e-50, np.inf)\n fh.close()\n return data\n\ndef plot_rates(z, filenames, group, rates):\n pyplot.xscale('log')\n pyplot.yscale('log')\n lss = ['-', '--', ':']\n cmap = pyplot.cm.jet\n tdata = dict((filename, load_rates(filename, group, rates))\n for filename in filenames)\n for ir, rate in enumerate(rates):\n for ifn, fn in enumerate(filenames):\n ztab = tdata[fn]['z']\n my_rate = interpvals(ztab+1, tdata[fn][rate], z+1)\n if ifn == 0:\n label = rate\n else:\n label = None\n pyplot.plot(z+1, my_rate, linestyle=lss[ifn],\n color=cmap((ir+1)/len(rates)),\n label=label)\n pyplot.xlim(z[0]+1, z[-1]+1)\n pyplot.xlabel('z+1')\n pyplot.ylabel('rates [CGS]')\n pyplot.legend(loc='best')", "Regular tables\nSolid lines are original tables, dashed lines are the new tables.", "z = np.linspace(0, 50, 100)\nplot_rates(z, ['CloudyData_UVB=HM2012.h5',\n 'CloudyData_HM2012_highz.h5'],\n 'Chemistry', ['k24', 'k25', 'k26', 'k29', 'k30'])\npyplot.ylim(1e-29, 1e-11)\n\nz = np.linspace(0, 50, 100)\nplot_rates(z, ['CloudyData_UVB=HM2012.h5',\n 'CloudyData_HM2012_highz.h5'],\n 'Photoheating', ['piHI', 'piHeI', 'piHeII'])\npyplot.ylim(1e-26, 1e-11)\n\nz = np.linspace(0, 50, 100)\n\n\npyplot.plot(z, k31_RFT14(z), color='red', label='k31 JHW')\npyplot.plot(z, k31_JW2012(z), color='red', ls = ':', label='k31 JW2012')\npyplot.plot(z, k31_Qin2020(z), color='red', ls = '-.', label='k31 Qin2020')\n\n\n\nplot_rates(z, ['CloudyData_UVB=HM2012.h5',\n 'CloudyData_HM2012_highz.h5'],\n 'Chemistry', ['k27', 'k28', 'k31'])\npyplot.ylim(1e-19, 1e-7)", "Self-shielded tables\nSolid lines are original tables, dashed lines are the new tables.", "z = np.linspace(0, 50, 100)\nplot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',\n 'CloudyData_HM2012_highz_shielded.h5'],\n 'Chemistry', ['k24', 'k25', 'k26', 'k29', 'k30'])\npyplot.ylim(1e-29, 1e-11)\n\nz = np.linspace(0, 50, 100)\nplot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',\n 'CloudyData_HM2012_highz_shielded.h5'],\n 'Photoheating', ['piHI', 'piHeI', 'piHeII'])\npyplot.ylim(1e-26, 1e-11)\n\nz = np.linspace(0, 50, 100)\nplot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',\n 'CloudyData_HM2012_highz_shielded.h5'],\n 'Chemistry', ['k27', 'k28', 'k31'])\npyplot.ylim(1e-19, 1e-7)\n\n\npyplot.plot(z, k31_RFT14(z), color='red', label='k31 JHW')\npyplot.plot(z, k31_JW2012(z), color='red', ls = ':', label='k31 JW2012')\npyplot.plot(z, k31_Qin2020(z), color='red', ls = '-.', label='k31 Qin2020')\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
shngli/Data-Mining-Python
Mining massive datasets/MapReduce SVM.ipynb
gpl-3.0
[ "MapReduce / SVM\nQuestion 1\nSuppose our input data to a map-reduce operation consists of integer values (the keys are not important). The map function takes an integer i and produces the list of pairs (p,i) such that p is a prime divisor of i. For example, map(12) = [(2,12), (3,12)]. The reduce function is addition. That is, reduce(p, [i1, i2, ..., ik]) is (p, i1 + i2 +...+ ik). Compute the output, if the input is the set of integers 15, 21, 24, 30, 49.", "from collections import defaultdict\nimport math\n\n# determine if an integer n is a prime number\ndef isPrime(n):\n if n == 2:\n return True\n if n%2 == 0 or n <= 1:\n return False\n sqr = int(math.sqrt(n)) + 1\n for divisor in range(3, sqr, 2):\n if n%divisor == 0:\n return False\n return True\n\n# Output the prime divisors of each integers\nreduce = defaultdict(list)\ndef map(integer):\n output = []\n for i in range(2, integer):\n if isPrime(i) and integer%i == 0:\n output.append(i)\n return output\n\n# Input list of integers\ninteger = [15, 21, 24, 30, 49]\n\n# Print every integer and its prime divisor(s)\n# eg. The prime divisors of 15 are 3 & 5\nfor n in integer:\n print \"Integer:\", n\n primeDivisor = map(n)\n print \"Prime divisor(s):\", primeDivisor\n for key in primeDivisor:\n reduce[key].append(n)\n\nfor key, values in reduce.items():\n print \"prime divisor and the sum of integers:\", key, \",\", sum(values)", "Question 2\nUse the matrix-vector multiplication and apply the Map function to this matrix and vector:\n| 1 | 2 | 3 | 4 | | 1 |\n|---|---|---|---| |---|\n| 5 | 6 | 7 | 8 | | 2 |\n| 9 | 10 | 11 | 12 | | 3 |\n| 13 | 14 | 15 | 16 | | 4 |\nThen, identify the key-value pairs that are output of Map.", "import numpy as np\nimport itertools\n\nM = np.array([[1, 2, 3, 4],\n [5, 6, 7, 8],\n [9, 10, 11, 12],\n [13, 14, 15, 16],])\n\nv = np.array([1, 2, 3, 4])\n\ndef mr(M, v):\n t = []\n mr, mc = M.shape\n for i in range(mc):\n for j in range(mr):\n t.append((i, M[i, j] * v[j]))\n\n t = sorted(t, key=lambda x:x[0])\n for x in t:\n print (x[0]+1, x[1])\n\n r = np.zeros((mr, 1))\n for key, vals in itertools.groupby(t, key=lambda x:x[0]):\n vals = [x[1] for x in vals]\n r[key] = sum(vals)\n print '%s, %s' % (key, sum(vals))\n return r.transpose()\n\n#print np.dot(M, v.transpose())\nprint mr(M, v)", "Question 3\nSuppose we have the following relations:", "from IPython.display import Image\nImage(filename='relations.jpeg')", "and we take their natural join. Apply the Map function to the tuples of these relations. Then, construct the elements that are input to the Reduce function. 
Identify these elements.", "import numpy as np\nimport itertools\n\nR = [(0, 1),\n (1, 2),\n (2, 3)]\n\nS = [(0, 1),\n (1, 2),\n (2, 3)]\n\ndef hash_join(R, S):\n h = {}\n for a, b in R:\n h.setdefault(b, []).append(a)\n\n j = []\n for b, c in S:\n if not h.has_key(b):\n continue\n for r in h[b]:\n j.append( (r, b, c) )\n\n return j\n\ndef mr(R, S):\n m = []\n for a, b in R:\n m.append( (b, ('R', a)) )\n for b, c in S:\n m.append( (b, ('S', c)) )\n\n m = sorted(m, key=lambda x:x[0])\n\n r = []\n for key, vals in itertools.groupby(m, key=lambda x:x[0]):\n vals = [x[1] for x in vals]\n print key, vals\n rs = [x for x in vals if x[0] == 'R']\n ss = [x for x in vals if x[0] == 'S']\n for ri in rs:\n for si in ss:\n r.append( (ri[1], key, si[1]) )\n return r\n\nprint hash_join(R, S)\nprint mr(R, S)", "Question 4\nThe figure below shows two positive points (purple squares) and two negative points (green circles):", "from IPython.display import Image\nImage(filename='svm1.jpeg')", "That is, the training data set consists of:\n- (x1,y1) = ((5,4),+1)\n- (x2,y2) = ((8,3),+1)\n- (x3,y3) = ((7,2),-1)\n- (x4,y4) = ((3,3),-1)\nOur goal is to find the maximum-margin linear classifier for this data. In easy cases, the shortest line between a positive and negative point has a perpendicular bisector that separates the points. If so, the perpendicular bisector is surely the maximum-margin separator. Alas, in this case, the closest pair of positive and negative points, x2 and x3, have a perpendicular bisector that misclassifies x1 as negative, so that won't work.\nThe next-best possibility is that we can find a pair of points on one side (i.e., either two positive or two negative points) such that a line parallel to the line through these points is the maximum-margin separator. In these cases, the limit to how far from the two points the parallel line can get is determined by the closest (to the line between the two points) of the points on the other side. For our simple data set, this situation holds.\nConsider all possibilities for boundaries of this type, and express the boundary as w.x+b=0, such that w.x+b≥1 for positive points x and w.x+b≤-1 for negative points x. Assuming that w = (w1,w2), identify the value of w1, w2, and b.", "import math\nimport numpy as np\n\nP = [((5, 4), 1),\n ((8, 3), 1),\n ((3, 3), -1),\n ((7, 2), -1)]\n\ndef line(pl0, pl1, p):\n dx, dy = pl1[0] - pl0[0], pl1[1] - pl0[1]\n a = abs((pl1[1] - pl0[1]) * p[0] - (pl1[0] - pl0[0]) * p[1] + pl1[0]*pl0[1] - pl0[0]*pl1[1])\n return a / math.sqrt(dx*dx + dy*dy)\n\ndef closest(L, pts):\n dist = [line(L[0][0], L[1][0], x[0]) for x in pts]\n ix = np.argmin(dist)\n return pts[ix], dist[ix]\n\ndef solve(A, B):\n # find the point in B closest to the line through both points in A\n p, d = closest(A, B)\n\n M = np.hstack((\n np.array([list(x[0]) for x in A] + [list(p[0])]),\n np.ones((3, 1))))\n b = np.array([x[1] for x in A] + [p[1]])\n x = np.linalg.solve(M, b)\n return x, d\n\nS = [solve([a for a in P if a[1] == 1], [a for a in P if a [1] == -1]),\n solve([a for a in P if a[1] == -1], [a for a in P if a [1] == 1])]\n\nix = np.argmax([x[1] for x in S])\nx = S[ix][0]\nprint 'w1 = %0.2f' % x[0]\nprint 'w2 = %0.2f' % x[1]\nprint 'b = %0.2f' % x[2]", "Question 5\nConsider the following training set of 16 points. 
The eight purple squares are positive examples, and the eight green circles are negative examples.", "Image(filename='newsvm4.jpeg')", "We propose to use the diagonal line with slope +1 and intercept +2 as a decision boundary, with positive examples above and negative examples below. However, like any linear boundary for this training set, some examples are misclassified. We can measure the goodness of the boundary by computing all the slack variables that exceed 0, and then using them in one of several objective functions. In this problem, we shall only concern ourselves with computing the slack variables, not an objective function.\nTo be specific, suppose the boundary is written in the form w.x+b=0, where w = (-1,1) and b = -2. Note that we can scale the three numbers involved as we wish, and so doing changes the margin around the boundary. However, we want to consider this specific boundary and margin. Determine the slack for each of the 16 points.", "import numpy as np\n\npos = [(5, 10),\n (7, 10),\n (1, 8),\n (3, 8),\n (7, 8),\n (1, 6),\n (3, 6),\n (3, 4)]\n\nneg = [(5, 8),\n (5, 6),\n (7, 6),\n (1, 4),\n (5, 4),\n (7, 4),\n (1, 2),\n (3, 2)]\n\nC = [(x, 1) for x in pos] + [(x, -1) for x in neg]\n\nw, b = np.array([-1, 1]), -2\n\nd = np.dot(np.array([list(x[0]) for x in C]), w) + b\n\nprint(\"Points\"+\"\\t\"+\"Slack\")\nfor i, m in enumerate(np.sign(d) == np.array([x[1] for x in C])):\n if C[i][1] == 1:\n slack = 1 - d\n else:\n slack = 1 + d\n #print \"%s %d %0.2f %0.2f\" % (C[i][0], C[i][1], d[i], slack[i])\n print \"%s\\t%0.2f\" % (C[i][0], slack[i])", "Question 6\nBelow we see a set of 20 points and a decision tree for classifying the points.", "Image(filename='gold.jpeg')\n\nImage(filename='dectree1.jpeg')", "To be precise, the 20 points represent (Age,Salary) pairs of people who do or do not buy gold jewelry. Age (appreviated A in the decision tree) is the x-axis, and Salary (S in the tree) is the y-axis. Those that do are represented by gold points, and those that do not by green points. The 10 points of gold-jewelry buyers are:\n(28,145), (38,115), (43,83), (50,130), (50,90), (50,60), (50,30), (55,118), (63,88), and (65,140).\nThe 10 points of those that do not buy gold jewelry are:\n(23,40), (25,125), (29,97), (33,22), (35,63), (42,57), (44, 105), (55,63), (55,20), and (64,37).\nSome of these points are correctly classified by the decision tree and some are not. Determine the classification of each point, and then indicate the points that are misclassified.", "A = 0\nS = 1\n\npos = [(28,145),\n (38,115),\n (43,83),\n (50,130),\n (50,90),\n (50,60),\n (50,30),\n (55,118),\n (63,88),\n (65,140)]\n\nneg = [(23,40),\n (25,125),\n (29,97),\n (33,22),\n (35,63),\n (42,57),\n (44, 105),\n (55,63),\n (55,20),\n (64,37)]\n\ndef classify(p):\n if p[A] < 45:\n return p[S] >= 110\n else:\n return p[S] >= 75\n\ne = [p for p, v in zip(pos, [classify(x) for x in pos]) if not v] + \\\n [p for p, v in zip(neg, [classify(x) for x in neg]) if v]\nprint e" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/learned_optimization
docs/notebooks/Part2_CustomTasks.ipynb
apache-2.0
[ "Part 2: Custom Tasks, Task Families, and Performance Improvements\nIn this part, we will look at how to define custom tasks and datasets. We will also consider families of tasks, which are common specifications of meta-learning problems. Finally, we will look at how to efficiently parallelize over tasks during training.\nPrerequisites\nThis document assumes knowledge of JAX which is covered in depth at the JAX Docs.\nIn particular, we would recomend making your way through JAX tutorial 101. We also recommend that you have worked your way through Part 1.", "!pip install git+https://github.com/google/learned_optimization.git\n\nimport numpy as np\nimport jax.numpy as jnp\nimport jax\nfrom matplotlib import pylab as plt\n\nfrom learned_optimization.outer_trainers import full_es\nfrom learned_optimization.outer_trainers import truncated_pes\nfrom learned_optimization.outer_trainers import gradient_learner\nfrom learned_optimization.outer_trainers import truncation_schedule\n\nfrom learned_optimization.tasks import quadratics\nfrom learned_optimization.tasks.fixed import image_mlp\nfrom learned_optimization.tasks import base as tasks_base\nfrom learned_optimization.tasks.datasets import base as datasets_base\n\nfrom learned_optimization.learned_optimizers import base as lopt_base\nfrom learned_optimization.learned_optimizers import mlp_lopt\nfrom learned_optimization.optimizers import base as opt_base\n\nfrom learned_optimization import optimizers\nfrom learned_optimization import eval_training\n\nimport haiku as hk\nimport tqdm", "Defining a custom Dataset\nThe dataset's in this library consists of iterators which yield batches of the corresponding data. For the provided tasks, these dataset have 4 splits of data rather than the traditional 3. We have \"train\" which is data used by the task to train a model, \"inner_valid\" which contains validation data for use when inner training (training an instance of a task). This could be use for, say, picking hparams. \"outer_valid\" which is used to meta-train with -- this is unseen in inner training and thus serves as a basis to train learned optimizers against. \"test\" which can be used to test the learned optimizer with.\nTo make a dataset, simply write 4 iterators with these splits.\nFor performance reasons, creating these iterators cannot be slow.\nThe existing dataset's make extensive use of caching to share iterators across tasks which use the same data iterators.\nTo account for this reuse, it is expected that these iterators are always randomly sampling data and have a large shuffle buffer so as to not run into any sampling issues.", "import numpy as np\n\n\ndef data_iterator():\n bs = 3\n while True:\n batch = {\"data\": np.zeros([bs, 5])}\n yield batch\n\n\n@datasets_base.dataset_lru_cache\ndef get_datasets():\n return datasets_base.Datasets(\n train=data_iterator(),\n inner_valid=data_iterator(),\n outer_valid=data_iterator(),\n test=data_iterator())\n\n\nds = get_datasets()\nnext(ds.train)", "Defining a custom Task\nTo define a custom class, one simply needs to write a base class of Task. 
Let's look at a simple example consisting of a quadratic task with noisy targets.", "# First we construct data iterators.\ndef noise_datasets():\n\n  def _fn():\n    while True:\n      yield np.random.normal(size=[4, 2]).astype(dtype=np.float32)\n\n  return datasets_base.Datasets(\n      train=_fn(), inner_valid=_fn(), outer_valid=_fn(), test=_fn())\n\n\nclass MyTask(tasks_base.Task):\n  datasets = noise_datasets()\n\n  def loss(self, params, rng, data):\n    return jnp.sum(jnp.square(params - data))\n\n  def init(self, key):\n    return jax.random.normal(key, shape=(4, 2))\n\n\ntask = MyTask()\nkey = jax.random.PRNGKey(0)\nkey1, key = jax.random.split(key)\nparams = task.init(key)\n\ntask.loss(params, key1, next(task.datasets.train))", "Meta-training on multiple tasks: TaskFamily\nWhat we have shown previously was meta-training on a single task instance.\nWhile sometimes this is sufficient for a given situation, in many situations we seek to meta-train a meta-learning algorithm such as a learned optimizer on a mixture of different tasks.\nOne path to do this is to simply run more than one meta-gradient computation, each with different tasks, average the gradients, and perform one meta-update.\nThis works great when the tasks are quite different -- e.g. meta-gradients when training a convnet vs an MLP.\nA big negative to this is that these meta-gradient calculations happen sequentially, and thus make poor use of hardware accelerators like GPU or TPU.\nAs a solution to this problem, we have an abstraction of a TaskFamily to enable better use of hardware. A TaskFamily represents a distribution over a set of tasks and specifies particular samples from this distribution as a pytree of jax types.\nThe function to sample these configurations is called sample, and the function to get a task from the sampled config is task_fn. A TaskFamily also optionally contains datasets which are shared by all the Tasks it creates.\nAs a simple example, let's consider a family of quadratics parameterized by mean squared error to some target point which is itself sampled.", "PRNGKey = jnp.ndarray\nTaskParams = jnp.ndarray\n\n\nclass FixedDimQuadraticFamily(tasks_base.TaskFamily):\n  \"\"\"A simple TaskFamily with a fixed dimensionality but sampled target.\"\"\"\n\n  def __init__(self, dim: int):\n    super().__init__()\n    self._dim = dim\n    self.datasets = None\n\n  def sample(self, key: PRNGKey) -> TaskParams:\n    # Sample the target for the quadratic task.\n    return jax.random.normal(key, shape=(self._dim,))\n\n  def task_fn(self, task_params: TaskParams) -> tasks_base.Task:\n    dim = self._dim\n\n    class _Task(tasks_base.Task):\n\n      def loss(self, params, rng, _):\n        # Compute MSE to the target task.\n        return jnp.sum(jnp.square(task_params - params))\n\n      def init(self, key):\n        return jax.random.normal(key, shape=(dim,))\n\n    return _Task()", "With this task family defined, we can create instances by sampling a configuration and creating a task. This task acts like any other task in that it has an init and a loss function.", "task_family = FixedDimQuadraticFamily(10)\nkey = jax.random.PRNGKey(0)\ntask_cfg = task_family.sample(key)\ntask = task_family.task_fn(task_cfg)\n\nkey1, key = jax.random.split(key)\nparams = task.init(key)\nbatch = None\ntask.loss(params, key, batch)", "To achieve speedups, we can now leverage jax.vmap to train multiple task instances in parallel! 
Depending on the task, this can be considerably faster than serially executing them.", "def train_task(cfg, key):\n  task = task_family.task_fn(cfg)\n  key1, key = jax.random.split(key)\n  params = task.init(key1)\n  opt = opt_base.Adam()\n\n  opt_state = opt.init(params)\n\n  for i in range(4):\n    params = opt.get_params(opt_state)\n    loss, grad = jax.value_and_grad(task.loss)(params, key, None)\n    opt_state = opt.update(opt_state, grad, loss=loss)\n  loss = task.loss(params, key, None)\n  return loss\n\n\ntask_cfg = task_family.sample(key)\nprint(\"single loss\", train_task(task_cfg, key))\n\nkeys = jax.random.split(key, 32)\ntask_cfgs = jax.vmap(task_family.sample)(keys)\nlosses = jax.vmap(train_task)(task_cfgs, keys)\nprint(\"multiple losses\", losses)", "Because of this ability to apply vmap over task families, this is the main building block for a number of the high level libraries in this package. Single tasks can always be converted to a task family with:", "single_task = image_mlp.ImageMLP_FashionMnist8_Relu32()\ntask_family = tasks_base.single_task_to_family(single_task)", "This wrapper task family has no configurable value and always returns the base task.", "cfg = task_family.sample(key)\nprint(\"config only contains a dummy value:\", cfg)\ntask = task_family.task_fn(cfg)\n# Tasks are the same\nassert task == single_task", "Limitations of TaskFamily\nTask families are designed for, and only work with, variation that results in a static computation graph. This is required for jax.vmap to work.\nThis means things like naively changing hidden sizes, numbers of layers, or activation functions are off the table.\nIn some cases, one can leverage other jax control flow such as jax.lax.cond to select between implementations. For example, one could make a TaskFamily that used one of 2 activation functions (see the sketch at the end of this section). While this works, the resulting vectorized computation could be slow and thus profiling is required to determine if this is a good idea or not.\nIn this code base, we mainly use TaskFamily to parameterize over different kinds of initializations.\n
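As a rough sketch of the jax.lax.cond pattern mentioned above (the class below is a made-up illustration, not part of the library), a sampled config can carry a boolean flag, and the loss can pick between two fixed activation functions with jax.lax.cond so that the traced computation stays static and jax.vmap still applies:\n\n```python\n# Hypothetical example -- not part of learned_optimization.\nclass TwoActivationQuadraticFamily(tasks_base.TaskFamily):\n\n  def __init__(self, dim: int):\n    super().__init__()\n    self._dim = dim\n    self.datasets = None\n\n  def sample(self, key):\n    act_key, target_key = jax.random.split(key)\n    # The config is a pytree: a bool choosing the activation plus a target.\n    return {\"use_tanh\": jax.random.bernoulli(act_key),\n            \"target\": jax.random.normal(target_key, shape=(self._dim,))}\n\n  def task_fn(self, task_params):\n    dim = self._dim\n\n    class _Task(tasks_base.Task):\n\n      def loss(self, params, rng, _):\n        # Both branches are traced; lax.cond selects one at runtime, so the\n        # computation graph has the same shape for every sampled config.\n        hidden = jax.lax.cond(task_params[\"use_tanh\"], jnp.tanh, jax.nn.relu, params)\n        return jnp.sum(jnp.square(hidden - task_params[\"target\"]))\n\n      def init(self, key):\n        return jax.random.normal(key, shape=(dim,))\n\n    return _Task()\n```\n\nConfigs sampled from such a family are still plain pytrees, so they can be stacked and passed through jax.vmap exactly as in the example above." ]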
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rajeshb/SelfDrivingCar
CarND-LeNet-Lab/LeNet-Lab.ipynb
mit
[ "LeNet Lab\n\nSource: Yan LeCun\nLoad Data\nLoad the MNIST data, which comes pre-loaded with TensorFlow.\nYou do not need to modify this section.", "from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"MNIST_data/\", reshape=False)\nX_train, y_train = mnist.train.images, mnist.train.labels\nX_validation, y_validation = mnist.validation.images, mnist.validation.labels\nX_test, y_test = mnist.test.images, mnist.test.labels\n\nassert(len(X_train) == len(y_train))\nassert(len(X_validation) == len(y_validation))\nassert(len(X_test) == len(y_test))\n\nprint()\nprint(\"Image Shape: {}\".format(X_train[0].shape))\nprint()\nprint(\"Training Set: {} samples\".format(len(X_train)))\nprint(\"Validation Set: {} samples\".format(len(X_validation)))\nprint(\"Test Set: {} samples\".format(len(X_test)))", "The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.\nHowever, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.\nIn order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).\nYou do not need to modify this section.", "import numpy as np\n\n# Pad images with 0s\nX_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')\n \nprint(\"Updated Image Shape: {}\".format(X_train[0].shape))", "Visualize Data\nView a sample from the dataset.\nYou do not need to modify this section.", "import random\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nindex = random.randint(0, len(X_train))\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(1,1))\nplt.imshow(image, cmap=\"gray\")\nprint(y_train[index])", "Preprocess Data\nShuffle the training data.\nYou do not need to modify this section.", "from sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)", "Setup TensorFlow\nThe EPOCH and BATCH_SIZE values affect the training speed and model accuracy.\nYou do not need to modify this section.", "import tensorflow as tf\n\nEPOCHS = 10\nBATCH_SIZE = 128", "TODO: Implement LeNet-5\nImplement the LeNet-5 neural network architecture.\nThis is the only cell you need to edit.\nInput\nThe LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.\nArchitecture\nLayer 1: Convolutional. The output shape should be 28x28x6.\nActivation. Your choice of activation function.\nPooling. The output shape should be 14x14x6.\nLayer 2: Convolutional. The output shape should be 10x10x16.\nActivation. Your choice of activation function.\nPooling. The output shape should be 5x5x16.\nFlatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using tf.contrib.layers.flatten, which is already imported for you.\nLayer 3: Fully Connected. This should have 120 outputs.\nActivation. Your choice of activation function.\nLayer 4: Fully Connected. This should have 84 outputs.\nActivation. Your choice of activation function.\nLayer 5: Fully Connected (Logits). 
This should have 10 outputs.\nOutput\nReturn the result of the 2nd fully connected layer.", "from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Hyperparameters\n mu = 0\n sigma = 0.1\n dropout = 0.75\n \n # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n weights = {\n 'wc1': tf.Variable(tf.random_normal([5,5,1,6])),\n 'wc2': tf.Variable(tf.random_normal([5,5,6,16])),\n 'wd1': tf.Variable(tf.random_normal([400, 120])),\n 'wd2': tf.Variable(tf.random_normal([120, 84])),\n 'wd3': tf.Variable(tf.random_normal([84, 10]))}\n \n biases = {\n 'bc1': tf.Variable(tf.zeros(6)),\n 'bc2': tf.Variable(tf.zeros(16)),\n 'bd1': tf.Variable(tf.zeros(120)),\n 'bd2': tf.Variable(tf.zeros(84)),\n 'bd3': tf.Variable(tf.zeros(10))}\n \n conv1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='VALID')\n conv1 = tf.nn.bias_add(conv1, biases['bc1'])\n \n # TODO: Activation.\n conv1 = tf.nn.relu(conv1)\n \n # TODO: Pooling. Input = 28x28x6. Output = 14x14x6.\n ksize = [1,2,2,1]\n strides = [1,2,2,1]\n padding = 'VALID'\n conv1 = tf.nn.max_pool(conv1, ksize, strides, padding)\n\n # TODO: Layer 2: Convolutional. Output = 10x10x16.\n conv2 = tf.nn.conv2d(conv1, weights['wc2'], strides=[1, 1, 1, 1], padding='VALID')\n conv2 = tf.nn.bias_add(conv2, biases['bc2'])\n \n # TODO: Activation.\n conv2 = tf.nn.relu(conv2)\n \n # TODO: Pooling. Input = 10x10x16. Output = 5x5x16.\n ksize = [1,2,2,1]\n strides = [1,2,2,1]\n padding = 'VALID'\n conv2 = tf.nn.max_pool(conv2, ksize, strides, padding)\n\n # TODO: Flatten. Input = 5x5x16. Output = 400.\n fc0 = flatten(conv2)\n \n # TODO: Layer 3: Fully Connected. Input = 400. Output = 120.\n fc1 = tf.add(tf.matmul(fc0, weights['wd1']), biases['bd1'])\n \n # TODO: Activation.\n fc1 = tf.nn.relu(fc1)\n\n # TODO: Layer 4: Fully Connected. Input = 120. Output = 84.\n fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])\n \n # TODO: Activation.\n fc2 = tf.nn.relu(fc2)\n\n # TODO: Layer 5: Fully Connected. Input = 84. 
Output = 10.\n logits = tf.add(tf.matmul(fc2, weights['wd3']), biases['bd3'])\n \n return logits", "Features and Labels\nTrain LeNet to classify MNIST data.\nx is a placeholder for a batch of input images.\ny is a placeholder for a batch of output labels.\nYou do not need to modify this section.", "x = tf.placeholder(tf.float32, (None, 32, 32, 1))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 10)", "Training Pipeline\nCreate a training pipeline that uses the model to classify MNIST data.\nYou do not need to modify this section.", "rate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)", "Model Evaluation\nEvaluate how well the loss and accuracy of the model for a given dataset.\nYou do not need to modify this section.", "correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples", "Train the Model\nRun the training data through the training pipeline to train the model.\nBefore each epoch, shuffle the training set.\nAfter each epoch, measure the loss and accuracy of the validation set.\nSave the model after training.\nYou do not need to modify this section.", "with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, 'lenet')\n print(\"Model saved\")", "Evaluate the Model\nOnce you are completely satisfied with your model, evaluate the performance of the model on the test set.\nBe sure to only do this once!\nIf you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.\nYou do not need to modify this section.", "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mit-crpg/openmc
examples/jupyter/nuclear-data-resonance-covariance.ipynb
mit
[ "Nuclear Data: Resonance Covariance\nIn this notebook we will explore features of the Python API that allow us to import and manipulate resonance covariance data. A full description of the ENDF-VI and ENDF-VII formats can be found in the ENDF102 manual.", "%matplotlib inline\nimport os\nfrom pprint import pprint\nimport shutil\nimport subprocess\nimport urllib.request\n\nimport h5py\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc.data", "ENDF: Resonance Covariance Data\nLet's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in:", "# Download ENDF file\nurl = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'\nfilename, headers = urllib.request.urlretrieve(url, 'gd157.endf')\n\n# Load into memory\ngd157_endf = openmc.data.IncidentNeutron.from_endf(filename, covariance=True)\ngd157_endf", "We can access the parameters contained within File 32 in a similar manner to the File 2 parameters from before.", "gd157_endf.resonance_covariance.ranges[0].parameters[:5]", "The newly created object will contain multiple resonance regions within gd157_endf.resonance_covariance.ranges. We can access the full covariance matrix from File 32 for a given range by:", "covariance = gd157_endf.resonance_covariance.ranges[0].covariance", "This covariance matrix currently only stores the upper triangular portion as covariance matrices are symmetric. Plotting the covariance matrix:", "plt.imshow(covariance, cmap='seismic',vmin=-0.008, vmax=0.008)\nplt.colorbar()", "The correlation matrix can be constructed using the covariance matrix and also gives some insight into the relations among the parameters.", "corr = np.zeros([len(covariance),len(covariance)])\nfor i in range(len(covariance)):\n    for j in range(len(covariance)):\n        corr[i, j]=covariance[i, j]/covariance[i, i]**(0.5)/covariance[j, j]**(0.5)\nplt.imshow(corr, cmap='seismic',vmin=-1.0, vmax=1.0)\nplt.colorbar()\n", "Sampling and Reconstruction\nThe covariance module also has the ability to sample a new set of parameters using the covariance matrix. Currently the sampling uses numpy.random.multivariate_normal(). Because parameters are assumed to have a multivariate normal distribution, this method does not currently guarantee that sampled parameters will be positive.", "rm_resonance = gd157_endf.resonances.ranges[0]\nn_samples = 5\nsamples = gd157_endf.resonance_covariance.ranges[0].sample(n_samples)\ntype(samples[0])\n", "The sampling routine requires the incorporation of the openmc.data.ResonanceRange for the same resonance range object. This allows each sample itself to be its own openmc.data.ResonanceRange with a new set of parameters. Looking at some of the sampled parameters below:", "print('Sample 1')\nsamples[0].parameters[:5]\n\nprint('Sample 2')\nsamples[1].parameters[:5]", "We can reconstruct the cross section from the sampled parameters using the reconstruct method of openmc.data.ResonanceRange. For more on reconstruction see the Nuclear Data example notebook.", "gd157_endf.resonances.ranges\n\nenergy_range = [rm_resonance.energy_min, rm_resonance.energy_max]\nenergies = np.logspace(np.log10(energy_range[0]),\n                       np.log10(energy_range[1]), 10000)\nfor sample in samples:\n    xs = sample.reconstruct(energies)\n    elastic_xs = xs[2]\n    plt.loglog(energies, elastic_xs)\nplt.xlabel('Energy (eV)')\nplt.ylabel('Cross section (b)')", "Subset Selection\nAnother capability of the covariance module is selecting a subset of the resonance parameters and the corresponding subset of the covariance matrix. 
We can do this by specifying the value we want to discriminate and the bounds within one energy region. Selecting only resonances with J=2:", "lower_bound = 2; # inclusive\nupper_bound = 2; # inclusive\nrm_res_cov_sub = gd157_endf.resonance_covariance.ranges[0].subset('J',[lower_bound,upper_bound])\nrm_res_cov_sub.file2res.parameters[:5]", "The subset method will also store the corresponding subset of the covariance matrix.", "rm_res_cov_sub.covariance\ngd157_endf.resonance_covariance.ranges[0].covariance.shape\n", "Checking the size of the new covariance matrix to be sure it was subset properly:", "old_n_parameters = gd157_endf.resonance_covariance.ranges[0].parameters.shape[0]\nold_shape = gd157_endf.resonance_covariance.ranges[0].covariance.shape\nnew_n_parameters = rm_res_cov_sub.file2res.parameters.shape[0]\nnew_shape = rm_res_cov_sub.covariance.shape\nprint('Number of parameters\\nOriginal: '+str(old_n_parameters)+'\\nSubset: '+str(new_n_parameters)+'\\nCovariance Size\\nOriginal: '+str(old_shape)+'\\nSubset: '+str(new_shape))\n", "And finally, we can sample from the subset as well.", "samples_sub = rm_res_cov_sub.sample(n_samples)\nsamples_sub[0].parameters[:5]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Lysine Acetylation -svc-checkpoint.ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negative vs Sans Random in Imbalanced Techniques using K acetylation.\nTraining data is from CUCKOO group and benchmarks are from dbptm.", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n    print(\"y\", i)\n    y = Predictor()\n    y.load_data(file=\"Data/Training/k_acetylation.csv\")\n    y.process_data(vector_function=\"sequence\", amino_acid=\"K\", imbalance_function=i, random_data=0)\n    y.supervised_training(\"svc\")\n    y.benchmark(\"Data/Benchmarks/acet.csv\", \"K\")\n    del y\n    print(\"x\", i)\n    x = Predictor()\n    x.load_data(file=\"Data/Training/k_acetylation.csv\")\n    x.process_data(vector_function=\"sequence\", amino_acid=\"K\", imbalance_function=i, random_data=1)\n    x.supervised_training(\"svc\")\n    x.benchmark(\"Data/Benchmarks/acet.csv\", \"K\")\n    del x\n", "Chemical Vector", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nfor i in par:\n    print(\"y\", i)\n    y = Predictor()\n    y.load_data(file=\"Data/Training/k_acetylation.csv\")\n    y.process_data(vector_function=\"chemical\", amino_acid=\"K\", imbalance_function=i, random_data=0)\n    y.supervised_training(\"svc\")\n    y.benchmark(\"Data/Benchmarks/acet.csv\", \"K\")\n    del y\n    print(\"x\", i)\n    x = Predictor()\n    x.load_data(file=\"Data/Training/k_acetylation.csv\")\n    x.process_data(vector_function=\"chemical\", amino_acid=\"K\", imbalance_function=i, random_data=1)\n    x.supervised_training(\"svc\")\n    x.benchmark(\"Data/Benchmarks/acet.csv\", \"K\")\n    del x\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/tfx
docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Simple TFX Pipeline for Vertex Pipelines\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"/>View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a></td>\n<td><a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_simple.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Run in Google Cloud Vertex AI Workbench</a></td>\n</table></div>\n\nThis notebook-based tutorial will create a simple TFX pipeline and run it using\nGoogle Cloud Vertex Pipelines. This notebook is based on the TFX pipeline\nwe built in\nSimple TFX Pipeline Tutorial.\nIf you are not familiar with TFX and you have not read that tutorial yet, you\nshould read it before proceeding with this notebook.\nGoogle Cloud Vertex Pipelines helps you to automate, monitor, and govern\nyour ML systems by orchestrating your ML workflow in a serverless manner. You\ncan define your ML pipelines using Python with TFX, and then execute your\npipelines on Google Cloud. See\nVertex Pipelines introduction\nto learn more about Vertex Pipelines.\nThis notebook is intended to be run on\nGoogle Colab or on\nAI Platform Notebooks. If you\nare not using one of these, you can simply click \"Run in Google Colab\" button\nabove.\nSet up\nBefore you run this notebook, ensure that you have following:\n- A Google Cloud Platform project.\n- A Google Cloud Storage bucket. 
See\nthe guide for creating buckets.\n- Enable\nVertex AI and Cloud Storage API.\nPlease see\nVertex documentation\nto configure your GCP project further.\nInstall Python packages\nWe will install required Python packages including TFX and KFP to author ML\npipelines and submit jobs to Vertex Pipelines.", "# Use the latest version of pip.\n!pip install --upgrade pip\n!pip install --upgrade \"tfx[kfp]<2\"", "Did you restart the runtime?\nIf you are using Google Colab, the first time that you run\nthe cell above, you must restart the runtime by clicking\nthe \"RESTART RUNTIME\" button above or using the \"Runtime > Restart\nruntime ...\" menu. This is because of the way that Colab\nloads packages.\nIf you are not on Colab, you can restart the runtime with the following cell.", "# docs_infra: no_execute\nimport sys\nif not 'google.colab' in sys.modules:\n  # Automatically restart kernel after installs\n  import IPython\n  app = IPython.Application.instance()\n  app.kernel.do_shutdown(True)", "Log in to Google for this notebook\nIf you are running this notebook on Colab, authenticate with your user account:", "import sys\nif 'google.colab' in sys.modules:\n  from google.colab import auth\n  auth.authenticate_user()", "If you are on AI Platform Notebooks, authenticate with Google Cloud before\nrunning the next section, by running\nsh\ngcloud auth login\nin the Terminal window (which you can open via File > New in the\nmenu). You only need to do this once per notebook instance.\nCheck the package versions.", "import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))\nimport kfp\nprint('KFP version: {}'.format(kfp.__version__))", "Set up variables\nWe will set up some variables used to customize the pipelines below. The following\ninformation is required:\n\nGCP Project id. See\nIdentifying your project id.\nGCP Region to run pipelines. For more information about the regions that\nVertex Pipelines is available in, see the\nVertex AI locations guide.\nGoogle Cloud Storage Bucket to store pipeline outputs.\n\nEnter required values in the cell below before running it.", "GOOGLE_CLOUD_PROJECT = ''     # <--- ENTER THIS\nGOOGLE_CLOUD_REGION = ''      # <--- ENTER THIS\nGCS_BUCKET_NAME = ''          # <--- ENTER THIS\n\nif not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):\n  from absl import logging\n  logging.error('Please set all required parameters.')", "Set gcloud to use your project.", "!gcloud config set project {GOOGLE_CLOUD_PROJECT}\n\nPIPELINE_NAME = 'penguin-vertex-pipelines'\n\n# Path to various pipeline artifact.\nPIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(\n    GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' Python module.\nMODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(\n    GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for input data.\nDATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# This is the path where your model will be pushed for serving.\nSERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(\n    GCS_BUCKET_NAME, PIPELINE_NAME)\n\nprint('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))", "Prepare example data\nWe will use the same\nPalmer Penguins dataset\nas\nSimple TFX Pipeline Tutorial.\nThere are four numeric features in this dataset which were already normalized\nto have range [0,1]. We will build a classification model which predicts the\nspecies of penguins.\nWe need to make our own copy of the dataset. 
Because TFX ExampleGen reads\ninputs from a directory, we need to create a directory and copy dataset to it\non GCS.", "!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/", "Take a quick look at the CSV file.", "!gsutil cat {DATA_ROOT}/penguins_processed.csv | head", "Create a pipeline\nTFX pipelines are defined using Python APIs. We will define a pipeline which\nconsists of three components, CsvExampleGen, Trainer and Pusher. The pipeline\nand model definition is almost the same as\nSimple TFX Pipeline Tutorial.\nThe only difference is that we don't need to set metadata_connection_config\nwhich is used to locate\nML Metadata database. Because\nVertex Pipelines uses a managed metadata service, users don't need to care\nof it, and we don't need to specify the parameter.\nBefore actually define the pipeline, we need to write a model code for the\nTrainer component first.\nWrite model code.\nWe will use the same model code as in the\nSimple TFX Pipeline Tutorial.", "_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\n# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_transform.tf_metadata import schema_utils\n\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\n\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\n_FEATURE_KEYS = [\n 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since we're not generating or creating a schema, we will instead create\n# a feature spec. Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n },\n _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n schema=schema).repeat()\n\n\ndef _make_keras_model() -> tf.keras.Model:\n \"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n 
metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: tfx.components.FnArgs):\n \"\"\"Train the model based on given args.\n\n Args:\n fn_args: Holds args used to train the model as name/value pairs.\n \"\"\"\n\n # This schema is usually either an output of SchemaGen or a manually-curated\n # version provided by pipeline author. A schema can also derived from TFT\n # graph if a Transform component is used. In the case when either is missing,\n # `schema_from_feature_spec` could be used to generate schema from very simple\n # feature_spec, but the schema returned would be very primitive.\n schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n schema,\n batch_size=_TRAIN_BATCH_SIZE)\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n schema,\n batch_size=_EVAL_BATCH_SIZE)\n\n model = _make_keras_model()\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps)\n\n # The result of the training should be saved in `fn_args.serving_model_dir`\n # directory.\n model.save(fn_args.serving_model_dir, save_format='tf')", "Copy the module file to GCS which can be accessed from the pipeline components.\nBecause model training happens on GCP, we need to upload this model definition. \nOtherwise, you might want to build a container image including the module file\nand use the image to run the pipeline.", "!gsutil cp {_trainer_module_file} {MODULE_ROOT}/", "Write a pipeline definition\nWe will define a function to create a TFX pipeline.", "# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and\n# slightly modified because we don't need `metadata_path` argument.\n\ndef _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,\n module_file: str, serving_model_dir: str,\n ) -> tfx.dsl.Pipeline:\n \"\"\"Creates a three component penguin pipeline with TFX.\"\"\"\n # Brings data into the pipeline.\n example_gen = tfx.components.CsvExampleGen(input_base=data_root)\n\n # Uses user-provided Python function that trains a model.\n trainer = tfx.components.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5))\n\n # Pushes the model to a filesystem destination.\n pusher = tfx.components.Pusher(\n model=trainer.outputs['model'],\n push_destination=tfx.proto.PushDestination(\n filesystem=tfx.proto.PushDestination.Filesystem(\n base_directory=serving_model_dir)))\n\n # Following three components will be included in the pipeline.\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n components=components)", "Run the pipeline on Vertex Pipelines.\nWe used LocalDagRunner which runs on local environment in\nSimple TFX Pipeline Tutorial.\nTFX provides multiple orchestrators to run your pipeline. In this tutorial we\nwill use the Vertex Pipelines together with the Kubeflow V2 dag runner.\nWe need to define a runner to actually run the pipeline. 
You will compile\nyour pipeline into our pipeline definition format using TFX APIs.", "# docs_infra: no_execute\nimport os\n\nPIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'\n\nrunner = tfx.orchestration.experimental.KubeflowV2DagRunner(\n config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),\n output_filename=PIPELINE_DEFINITION_FILE)\n# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.\n_ = runner.run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=os.path.join(MODULE_ROOT, _trainer_module_file),\n serving_model_dir=SERVING_MODEL_DIR))", "The generated definition file can be submitted using kfp client.", "# docs_infra: no_execute\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import pipeline_jobs\nimport logging\nlogging.getLogger().setLevel(logging.INFO)\n\naiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)\n\njob = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,\n display_name=PIPELINE_NAME)\njob.submit()", "Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'\nin Google Cloud Console to see the\nprogress." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]