repo_name | path | license | cells | types
---|---|---|---|---
ES-DOC/esdoc-jupyterhub | notebooks/uhh/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: UHH\nSource ID: SANDBOX-2\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:41\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
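The ES-DOC notebook in the row above leaves every property value as a TODO. A minimal sketch of what one completed property cell would look like, using only the `DOC.set_id` / `DOC.set_value` calls shown in the notebook; the value `"NPZD"` is an illustrative pick from the listed valid choices, not the model's actual documented type:

```python
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')

# PROPERTY VALUE:
# "NPZD" is an illustrative choice taken from the valid choices listed in the
# notebook, not the model's actual documented value.
DOC.set_value("NPZD")
```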
ingmarschuster/rkhs_demo | RKHS_in_Machine_learning.ipynb | gpl-3.0 | [
"$\\newcommand{\\Reals}{\\mathbb{R}}\n\\newcommand{\\Nats}{\\mathbb{N}}\n\\newcommand{\\PDK}{\\mathbf{k}}\n\\newcommand{\\IS}{\\mathcal{X}} \n\\newcommand{\\FM}{\\Phi} \n\\newcommand{\\Gram}{K} \n\\newcommand{\\RKHS}{\\mathcal{H}}\n\\newcommand{\\prodDot}[2]{\\left\\langle#1,#2\\right\\rangle}\n\\DeclareMathOperator{\\argmin}{arg\\,min}\n\\DeclareMathOperator{\\argmax}{arg\\,max}$\nReproducing Kernel Hilbert Spaces in Machine Learning",
"from __future__ import division, print_function, absolute_import\nfrom IPython.display import SVG, display, Image\n\nimport numpy as np, scipy as sp, pylab as pl, matplotlib.pyplot as plt, scipy.stats as stats, sklearn, sklearn.datasets\nfrom scipy.spatial.distance import squareform, pdist, cdist\n\nimport distributions as dist #commit 480cf98 of https://github.com/ingmarschuster/distributions",
"Motivation: Feature engineering in Machine Learning\nIn ML, one classic way to handle nonlinear relations in data (non-numerical data) with linear methods is to map the data to so called features using a nonlinear function $\\FM$ (a function mapping from the data to a vector space).",
"display(Image(filename=\"monomials.jpg\", width=200))",
"In the Feature Space (the domain of $\\FM$), we can then use linear algebra, such as angles, norms and inner products, inducing nonlinear operations on the Input Space (codomain of $\\FM$). The central thing we need, apart from the feature space being a vector space are inner products, as they induce norms and a possibility to measure angles.\nSimple classification algorithm using only inner products\nSay we are given data points from the mixture of two distributions with densities $p_0,p_1$:\n$$x_i \\sim w_0 p_0 + w_1 p_1$$ and labels $l_i = 0$ if $x_i$ was actually generated by $p_0$, $l_i = 1$ otherwise. A very simple classification algorithm would be to compute the mean in feature space $\\mu_c = \\frac{1}{N_c} \\sum_{l_i = c} \\FM(x_i)$ for $c \\in {0,1}$ and assign a test point to the class which is the most similar in terms of inner product. In other words, the decision function \n$ f_d:\\IS\\to{0,1}$ is defined by\n$$f_d(x) = \\argmax_{c\\in{0,1}} \\prodDot{\\FM(x)}{\\mu_c}$$",
"data = np.vstack([stats.multivariate_normal(np.array([-2,2]), np.eye(2)*1.5).rvs(100),\n stats.multivariate_normal(np.ones(2)*2, np.eye(2)*1.5).rvs(100)])\ndistr_idx = np.r_[[0]*100, [1]*100]\n\nfor (idx, c, marker) in [(0,'r', (0,3,0)), (1, \"b\", \"x\")]:\n pl.scatter(*data[distr_idx==idx,:].T, c=c, alpha=0.4, marker=marker)\n pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0), head_width=0.2, head_length=0.2, fc=c, ec=c)\npl.show()",
"Remarkably, all positive definite functions are inner products in some feature space.\nTheorem Let $\\IS$ be a nonempty set and let $\\PDK:\\IS\\times\\IS \\to \\Reals$, called a kernel. The following two conditions are equivalent:\n* $\\PDK$ is symmetric and positive semi definite (psd), i.e. for all $x_1, \\dots, x_m \\in \\IS$ the matrix $\\Gram$ defined by with entries $\\Gram_{i,j} = \\PDK(x_i, x_j)$ is symmetric psd\n$\\FM$ is called the Feature Map and $\\RKHS_\\FM$ the feature space.\n* there exists a map $\\FM: \\IS \\to \\RKHS_\\FM$ to a hilbert space $\\RKHS_\\FM$ such that $$\\PDK(x_i, x_j) = \\prodDot{\\FM(x_i)}{\\FM(x_j)}_\\RKHS$$\nIn other words, $\\PDK$ computes the inner product in some $\\RKHS_\\FM$. We furthermore endow the space with the norm induced by the dot product $\\|\\cdot\\|_\\PDK$. From the second condition, it is easy to construct $\\PDK$ given $\\FM$. A general construction for $\\FM$ given $\\PDK$ is not as trivial but still elementary.\nConstruction of the canonical feature map (Aronszajn map)\nWe give the canonical construction of $\\FM$ from $\\PDK$, together with a definition of the inner product in the new space. In particular, the feature for each $x \\in \\IS$ will be a function from $\\IS$ to $\\Reals$.\n$$\\FM:\\IS \\to \\Reals^\\IS\\\n\\FM(x) = \\PDK(\\cdot, x)$$\nThus for the linear kernel $\\PDK(x,y)=\\prodDot{x}{y}$ we have $\\FM(x) = \\prodDot{\\cdot}{x}$ and for the gaussian kernel $\\PDK(x,y)=\\exp\\left(-0.5{\\|x-y\\|^2}/{\\sigma^2}\\right)$ we have $\\FM(x) = \\exp\\left(-0.5{\\|\\cdot -x \\|^2}/{\\sigma^2}\\right)$.\nNow $\\RKHS$ is the closure of $\\FM(\\IS)$ wrt. linear combinations of its elements:\n$$\\RKHS = \\left{f: f(\\cdot)=\\sum_{i=1}^m a_i \\PDK(\\cdot, x_i) \\right} = span(\\FM(\\IS))$$\nwhere $m \\in \\Nats, a_i \\in \\Reals, x \\in \\IS$. This makes $\\RKHS$ a vector space over $\\Reals$.\nFor $f(\\cdot)=\\sum_{i=1}^m a_i \\PDK(\\cdot, x_i)$ and $g(\\cdot)=\\sum_{i=1}^m' b_j \\PDK(\\cdot, x'j)$ we define the inner product in $\\RKHS$ as\n$$\\prodDot{f}{g} = \\sum{i=1}^m \\sum_{i=1}^m' b_j a_i \\PDK(x'_j, x_i)$$\nIn particular, for $f(\\cdot) = \\PDK(\\cdot,x), g(\\cdot) = \\PDK(\\cdot,x')$, we have $\\prodDot{f}{g} = \\prodDot{ \\PDK(\\cdot,x)}{ \\PDK(\\cdot,x')}=\\PDK(x,x')$. This is called the reproducing property of the kernel of this particular $\\RKHS$.\nObviously $\\RKHS$ with this inner product satisfies all conditions for a hilbert space: the inner product is\n* positive definite\n* linear in its first argument\n* symmetric\nwhich is why $\\RKHS$ is called a Reproducing Kernel Hilbert Space (RKHS).\nInner product classification algorithm is equivalent to a classification with KDEs\nThe naive classification algorithm we outlined earlier is actually equivalent to a simple classification algorithm using KDEs. For concreteness, let $\\PDK(x,x') = { {{(2\\pi )^{-N/2}\\left|\\Sigma \\right|^{-1/2}}}\\exp({-{ 0.5}(x-x' )^{\\top }\\Sigma ^{-1}(x-x' )}})$.\nThen the mean in feature space of data from distribution $c$ with the canonical feature map is \n$$\\mu_c = \\frac{1}{N_c} \\sum_{l_i = c} \\FM(x_i) = \\frac{1}{N_c} \\sum_{l_i = c} \\PDK(x_i, \\cdot) = \\frac{1}{N_c} \\sum_{l_i = c} { {{(2\\pi )^{-N/2}\\left|\\Sigma \\right|^{-1/2}}}\\exp({-{ 0.5}(\\cdot-x_i )^{\\top }\\Sigma ^{-1}(\\cdot-x_i )}})$$\nwhich is just a KDE of the density $p_c$ using gaussian kernels with parameter $\\Sigma$. 
For a test point $y$ that we want to classify, its feature is just $\\PDK(y,\\cdot) = { {{(2\\pi )^{-N/2}\\left|\\Sigma \\right|^{-1/2}}}\\exp({-{ 0.5}(y-\\cdot )^{\\top }\\Sigma ^{-1}(y-\\cdot )}})$. Its inner product with the class mean is just the evaluation of the KDE at $y$ (because of the reproducing property). Thus each point is classified as belonging to the class for which the KDE estimate assigns highest probability to $y$.",
"class Kernel(object):\n def mean_emb(self, samps):\n return lambda Y: self.k(samps, Y).sum()/len(samps)\n \n def mean_emb_len(self, samps):\n return self.k(samps, samps).sum()/len(samps**2)\n \n def k(self, X, Y):\n raise NotImplementedError()\n\nclass FeatMapKernel(Kernel):\n def __init__(self, feat_map):\n self.features = feat_map\n \n def features_mean(self, samps):\n return self.features(samps).mean(0)\n \n def mean_emb_len(self, samps):\n featue_space_mean = self.features_mean(samps)\n return featue_space_mean.dot(featue_space_mean)\n \n def mean_emb(self, samps):\n featue_space_mean = self.features(samps).mean(0)\n return lambda Y: self.features(Y).dot(featue_space_mean)\n \n def k(self, X, Y):\n gram = self.features(X).dot(self.features(Y).T)\n return gram\n\nclass LinearKernel(FeatMapKernel):\n def __init__(self):\n FeatMapKernel.__init__(self, lambda x: x)\n\nclass GaussianKernel(Kernel):\n def __init__(self, sigma):\n self.width = sigma\n \n def k(self, X, Y=None):\n assert(len(np.shape(X))==2)\n \n # if X=Y, use more efficient pdist call which exploits symmetry\n if Y is None:\n sq_dists = squareform(pdist(X, 'sqeuclidean'))\n else:\n assert(len(np.shape(Y))==2)\n assert(np.shape(X)[1]==np.shape(Y)[1])\n sq_dists = cdist(X, Y, 'sqeuclidean')\n \n K = exp(-0.5 * (sq_dists) / self.width ** 2)\n return K\n\nclass StudentKernel(Kernel):\n def __init__(self, s2, df):\n self.dens = dist.mvt(0,s2,df)\n \n def k(self, X,Y=None):\n if Y is None:\n sq_dists = squareform(pdist(X, 'sqeuclidean'))\n else:\n assert(len(np.shape(Y))==2)\n assert(np.shape(X)[1]==np.shape(Y)[1])\n sq_dists = cdist(X, Y, 'sqeuclidean')\n dists = np.sqrt(sq_dists)\n return exp(self.dens.logpdf(dists.flatten())).reshape(dists.shape)\n\n\ndef kernel_mean_inner_prod_classification(samps1, samps2, kernel):\n mean1 = kernel.mean_emb(samps1)\n norm_mean1 = kernel.mean_emb_len(samps1)\n mean2 = kernel.mean_emb(samps2)\n norm_mean2 = kernel.mean_emb_len(samps2)\n \n def sim(test):\n return (mean1(test) - mean2(test))\n \n def decision(test):\n if sim(test) >= 0:\n return 1\n else:\n return 0\n \n return sim, decision\n\n\n\ndef apply_to_mg(func, *mg):\n #apply a function to points on a meshgrid\n x = np.vstack([e.flat for e in mg]).T\n return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape)\n\ndef plot_with_contour(samps, data_idx, cont_func, method_name, delta = 0.025, pl = pl):\n x = np.arange(samps.T[0].min()-delta, samps.T[1].max()+delta, delta)\n y = np.arange(samps.T[1].min()-delta, samps.T[1].max()+delta, delta)\n X, Y = np.meshgrid(x, y)\n Z = apply_to_mg(cont_func, X,Y)\n Z = Z.reshape(X.shape)\n\n\n # contour labels can be placed manually by providing list of positions\n # (in data coordinate). See ginput_manual_clabel.py for interactive\n # placement.\n fig = pl.figure()\n pl.pcolormesh(X, Y, Z > 0, cmap=pl.cm.Pastel2)\n pl.contour(X, Y, Z, colors=['k', 'k', 'k'],\n linestyles=['--', '-', '--'],\n levels=[-.5, 0, .5])\n pl.title('Decision for '+method_name)\n #plt.clabel(CS, inline=1, fontsize=10)\n for (idx, c, marker) in [(0,'r', (0,3,0)), (1, \"b\", \"x\")]:\n pl.scatter(*data[distr_idx==idx,:].T, c=c, alpha=0.7, marker=marker)\n\n pl.show()\n \nfor (kern_name, kern) in [(\"Linear\", LinearKernel()), \n (\"Student-t\", StudentKernel(0.1,10)), \n (\"Gauss\", GaussianKernel(0.1))\n ]:\n (sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx==1,:], data[distr_idx==0,:], kern)\n plot_with_contour(data, distr_idx, sim, 'Inner Product classif. '+kern_name, pl = plt)",
"Obviously, the linear kernel might be enough already for this simple dataset. Another interesting observation however is that the Student-t based kernel is more robust to outliers of the datasets and yields a lower variance classification algorithm as compared to using a Gaussian kernel. This is to be expected, given the fatter tails of the Student-t. Now lets look at a dataset that is not linearly separable.",
"data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05)\n\nfor (kern_name, kern) in [(\"Linear\", LinearKernel()), \n (\"Stud\", StudentKernel(0.1,10)), \n (\"Gauss1\", GaussianKernel(0.1)),\n ]:\n (sim, dec) = kernel_mean_inner_prod_classification(data[distr_idx==1,:], data[distr_idx==0,:], kern)\n plot_with_contour(data, distr_idx, sim, 'Inner Product classif. '+kern_name, pl = plt)",
"In this dataset, the Linear kernel is not a good choice, simply because the classes are not separable linearly in input space. Gaussian and Student-t work, and Student-t shows slightly better robustness properties.\nThe kernel mean map\nOne of the objects we looked at, the kernel mean map is particularly interesting. In fact, for so called characteristic kernels, the integral\n$$\\mu_\\PDK = \\int \\PDK(x, \\cdot) \\mathrm{d}p(x)$$\npreserves all information about the distribution $p(x)$, like e.g. characteristic functions, while not even assuming that $x$ is numerical (remember that the only restriction on the codomain of $\\PDK$ is that it be nonempty).\nWhen we have an estimator $\\widehat{\\mu}\\PDK$ and a function $f \\in \\RKHS$, we have\n$$\\int f(x) \\mathrm{d}p(x) \\approx \\prodDot{f}{\\widehat{\\mu}\\PDK}$$\nbecause of the reproducing property. In other words, integration for functions in the RKHS $\\RKHS$ is just a dot product in $\\RKHS$. The unfortunate restriction being that functions in the RKHS are real valued (or complex valued in the more general case).\nLemma If a kernel is strictly positive definite, it is characteristic.\nThus the Gaussian and Student-t kernel and many other kernels induced by densities will be characteristic. The embedding gives us a distance measure between distributions for free.\nDefinition The Maximum Mean Discrepancy of two distributions given their kernel mean embeddings $\\mu_\\PDK, \\nu_\\PDK$ is defined by\n$$\\mathrm{MMD}(\\mu_\\PDK, \\nu_\\PDK) = \\|\\mu_\\PDK - \\nu_\\PDK\\|^2_\\PDK$$\nFurthermore, marginalization, the chain rule and Bayes rule can all be represented as operations in an RKHS. Also, independence tests for random variables operating in a RKHS have been proposed."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
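The RKHS notebook in the row above defines the Maximum Mean Discrepancy but never computes it. Below is a minimal, self-contained sketch of the biased (V-statistic) estimate of the squared MMD between two samples under a Gaussian kernel; it does not use the notebook's own classes, and the bandwidth `sigma=1.0` is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.spatial.distance import cdist

def gauss_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-0.5 * ||x - y||^2 / sigma^2)
    return np.exp(-0.5 * cdist(X, Y, 'sqeuclidean') / sigma ** 2)

def mmd2_biased(X, Y, sigma=1.0):
    # Biased (V-statistic) estimate of ||mu_X - mu_Y||^2 in the RKHS:
    # mean k(X, X) + mean k(Y, Y) - 2 * mean k(X, Y)
    return (gauss_kernel(X, X, sigma).mean()
            + gauss_kernel(Y, Y, sigma).mean()
            - 2.0 * gauss_kernel(X, Y, sigma).mean())

rng = np.random.RandomState(0)
A = rng.randn(200, 2)            # sample from N(0, I)
B = rng.randn(200, 2)            # another sample from the same distribution
C = rng.randn(200, 2) + 2.0      # sample from a shifted distribution
print(mmd2_biased(A, B))         # close to zero
print(mmd2_biased(A, C))         # clearly larger
```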
charlesll/Examples | PySolExExample.ipynb | gpl-2.0 | [
"Charles Le Losq\nFriday, 22 May 2015\nModified the 16 June 2015.\nGeophysical Laboratory,\nCarnegie Institution for Science\nExample of use of pysolex, the library using the software SolEx developped by Fred Witham, University of Bristol.\nFrom the header of the solex.cpp:\n\"An object that calculates the solubility relationships for basalt given basalt compositional parameters and PT conditions.\"\nDifference with the SolEx GUI software:\n- C++ code modified to perform only equilibrium calculation at given P, T, X. \n- The core library is written in C++, only functional modifications have been made, the calculation stays as it was written by Fred Witham.\n- The C++ library SolEx is not directly wrapped to keep it running without modification and for simplifying the wrapping process. Rather than a direct wrapping, I created a function pysolex.cpp that is wrapped using SWIG. Then, in the setup.py file, the distutils function calls the wrapped version of pysolex.cpp with knowing that it has to use libsolex.a! This allows to keep a clean C++ code for the SolEx core library that can be used independently of Python.\n- Results are in a SWIG array... For now, I only found a way to have them: directly reading the memory block were they are stored with ctypes... It's working but such low level in Python seems not ideal (=nice) to me?\nLet's start and call the useful libraries:",
"import numpy as np #for handling the numbers/arrays\nimport scipy #for a lot of scientific stuffs, optimisation, interpolation functions, etc.\nimport pysolex #we import the pysolex library!\nimport ctypes #for reading the SWIG array output\nimport matplotlib #for doing nice graphs\nimport matplotlib.pyplot as plt # For doing the plots\nfrom pylab import * #for doing nice graphs\n# We need this in this case because we use IPython notebooks, but not needed in a .py code\n%matplotlib inline ",
"Ok, now we start by defining a starting chemical composition of interest (warning to put dots so that Python interpretes the numbers as float and not int!):",
"CO2 = 4890.\nH2O = 3.67\nPPMS0 = 3560.\nPPMCl0 = 1572.\nSi = 52.12\nAl = 16.38\nFe = 5.82\nCa = 10.72\nMg = 6.71\nNa = 2.47\nK = 1.89",
"Now we define the value of pi to use, which is the parameterisation described in Dixon(1997). This value is not used unless piswitch is set to 1.",
"pi = -0.05341",
"Below is the wt% of SiO2 that was used for the SiO2 only parameterisation. From Fred: \"I think (I should check but dont have the code available now) that this is no-longer used as the SiO2 wt% is calculated from Si mol%.\"",
"SiO2 = 52.12 #should match the good value",
"Now the pisol switch: if 1 solubility is based on the value of pi, if 0 it is used on SiO2 wt% only.",
"pisol = 0",
"Now a second switch to determine if pisol is given or should be calculated: if 1 then the value pi is used for solubility calculations, and if 0 pi is calculated from the composition of the melt.",
"piswitch = 0",
"Now let's fix the other parameters of our system: temperature, pressure, and oxygen fugacity! We will start with a fixed oxygen fugacity, pressure and temperature:",
"T = 1153.; #in K\nP = 100.; #in bars\nNNO = 1.8;",
"Ok, for this single calculation, SolEx has a flag for terminal output, but it is not working in the Notebook. So let's put the flag to 0:",
"flagout = bool(0) #has to be a bool value",
"And the function to call is pysolex.pyex:",
"output = pysolex.pyex(H2O,CO2,PPMS0,PPMCl0,Si,Al,Fe,Ca,Mg,Na,K,pi,SiO2,pisol,piswitch,flagout,T,P,NNO)\noutput",
"Oh... Here is the result contained in output => a SWIG Object containing double number... To read it, I only found one way online: reading directly the memory block allocated to this Ojbect using ctypes:",
"rawPointer = output.__long__() # we're going to read the \"address\"\npC = ctypes.cast(rawPointer, ctypes.POINTER( ctypes.c_double )) # and we read the array stored at this address\nprint((\"wt% H2O = \"+str(pC[0])))\nprint((\"PPM CO2 = \"+str(pC[1])))\nprint((\"PPM S = \"+str(pC[2])))\nprint((\"PPM Cl = \"+str(pC[3])))\nprint((\"Vol% Exsolve = \"+str(pC[4])))\nprint((\"XV H2O (mass) = \"+str(pC[5])))\nprint((\"XV CO2 (mass) = \"+str(pC[6])))\nprint((\"XV S (mass) = \"+str(pC[7])))\nprint((\"XV Cl (mass) = \"+str(pC[8])))\nprint((\"molV H2O (mass) = \"+str(pC[9])))\nprint((\"molV CO2 (mass) = \"+str(pC[10])))\nprint((\"molV S (mass) = \"+str(pC[11])))\nprint((\"molV Cl (mass) = \"+str(pC[12])))\n",
"Ok, it's working. Now let's complicate the case. Let's imagine that we have a closed-system degassing, going from P = 4000 to 100 bar, as you can do in SolEx. You will write something like that to reproduce the calculation in Python:",
"Pint = np.arange(100,4000,100) #start, stop, step\nrev_Pint = Pint[::-1] # To have the first values being the highest ones\nresults = np.zeros((len(Pint),13)) # For storing the results",
"We create a loop in which we will call pysolex for doing the calculation:",
"for i in range(len(rev_Pint)):\n output = pysolex.pyex(H2O,CO2,PPMS0,PPMCl0,Si,Al,Fe,Ca,Mg,Na,K,pi,SiO2,pisol,piswitch,flagout,T,rev_Pint[i],NNO)\n rawPointer = output.__long__()\n pC = ctypes.cast(rawPointer, ctypes.POINTER( ctypes.c_double ))\n \n results[i,0] = pC[0] #wt% water\n results[i,1] = pC[1] #co2 ppm\n results[i,2] = pC[2] #S ppm\n results[i,3] = pC[3] #Cl ppm\n results[i,4] = pC[4] #EXSOLVE\n results[i,5] = pC[5] #XV H2O\n results[i,6] = pC[6] #XV CO2\n results[i,7] = pC[7] #XV S\n results[i,8] = pC[8] #XV Cl\n results[i,9] = pC[9] #molV H2O\n results[i,10] = pC[10] #molV CO2\n results[i,11] = pC[11] #molV S\n results[i,12] = pC[12] #molV Cl\n",
"Done! Let's do a nice graph for those results:",
"plt.plot(rev_Pint[:],results[:,0])\nplt.xlabel(\"Pressure, bars\", fontsize = 14)\nplt.ylabel(\"Water content in melt, wt%\", fontsize = 14)\nplt.title(\"Fig. 1: Water concentration vs pressure, closed system\",fontsize = 14,fontweight = \"bold\")\nplt.text(2000,2,(\"T =\"+str(T)+\"\\nNNO =\"+str(NNO)),fontsize = 14)",
"Let's do the same thing for the CO2 now:",
"plt.plot(rev_Pint[:],results[:,1])\nplt.xlabel(\"Pressure, bars\", fontsize = 14)\nplt.ylabel(\"CO$_2$ content in melt, ppm\", fontsize = 14)\nplt.title(\"Fig. 2: CO$_2$ concentration vs pressure, closed system\",fontsize = 14,fontweight = \"bold\")\nplt.text(1000,800,(\"T =\"+str(T)+\"\\nNNO =\"+str(NNO)),fontsize = 14)",
"Now let's make the case of an open system. Easy, we will just take the H2O, CO2, S and Cl values from the past output to input them in the next...",
"for i in range(len(rev_Pint)):\n if i == 0:\n output = pysolex.pyex(H2O,CO2,PPMS0,PPMCl0,Si,Al,Fe,Ca,Mg,Na,K,pi,SiO2,pisol,piswitch,flagout,T,rev_Pint[i],NNO)\n else:\n H2O = results[i-1,0]\n CO2 = results[i-1,1]\n PPMS = results[i-1,2]\n PPMCl = results[i-1,3]\n output = pysolex.pyex(H2O,CO2,PPMS,PPMCl,Si,Al,Fe,Ca,Mg,Na,K,pi,SiO2,pisol,piswitch,flagout,T,rev_Pint[i],NNO)\n \n rawPointer = output.__long__()\n pC = ctypes.cast(rawPointer, ctypes.POINTER( ctypes.c_double ))\n \n results[i,0] = pC[0] #wt% water\n results[i,1] = pC[1] #co2 ppm\n results[i,2] = pC[2] #S ppm\n results[i,3] = pC[3] #Cl ppm\n results[i,4] = pC[4] #EXSOLVE\n results[i,5] = pC[5] #XV H2O\n results[i,6] = pC[6] #XV CO2\n results[i,7] = pC[7] #XV S\n results[i,8] = pC[8] #XV Cl\n results[i,9] = pC[9] #molV H2O\n results[i,10] = pC[10] #molV CO2\n results[i,11] = pC[11] #molV S\n results[i,12] = pC[12] #molV Cl",
"We can now plot the results as we did for the closed system case:",
"plt.plot(rev_Pint[:],results[:,0])\nplt.xlabel(\"Pressure, bars\", fontsize = 14)\nplt.ylabel(\"Water content in melt, wt%\", fontsize = 14)\nplt.title(\"Fig. 3: Open system \",fontsize = 14,fontweight = \"bold\")\nplt.text(2000,2,(\"T =\"+str(T)+\"\\nNNO =\"+str(NNO)))\n\nplt.plot(rev_Pint[:],results[:,1])\nplt.xlabel(\"Pressure, bars\", fontsize = 14)\nplt.ylabel(\"CO$_2$ content in melt, ppm\", fontsize = 14)\nplt.title(\"Fig. 4: Open system \",fontsize = 14,fontweight = \"bold\")\n#plt.text(1000,800,(\"T =\"+str(T)+\"\\nNNO =\"+str(NNO)))\n\nresults[0,0]\n\nrev_Pint[0]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
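The pysolex notebook in the row above repeats the same ctypes pointer-reading block in every loop. A small helper that wraps this pattern is sketched below; it assumes, as the notebook itself does, that the SWIG object exposes its address via `__long__()` (Python 2) and that exactly 13 doubles are returned — assumptions taken from the notebook, not documented guarantees of pysolex:

```python
import ctypes

def read_solex_output(output, n_values=13):
    # Read the block of doubles behind the SWIG-wrapped pointer, exactly as the
    # notebook does inline: cast the raw address to a C double pointer and copy
    # the first n_values entries into a plain Python list.
    raw_pointer = output.__long__()
    p = ctypes.cast(raw_pointer, ctypes.POINTER(ctypes.c_double))
    return [p[i] for i in range(n_values)]

# Hypothetical usage, mirroring the notebook's loop:
# values = read_solex_output(pysolex.pyex(H2O, CO2, PPMS0, PPMCl0, Si, Al, Fe,
#                                         Ca, Mg, Na, K, pi, SiO2, pisol,
#                                         piswitch, flagout, T, P, NNO))
# results[i, :] = values
```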
sunilmallya/dl-twitch-series | E3_finetuning_randall_not_randall.ipynb | apache-2.0 | [
"Build a model to detect if Randall is in the image or not!\nRandall or Not\nDataset\n\nRandall : s3://ranman-selfies\nNot Randall: http://vis-www.cs.umass.edu/lfw/lfw.tgz (celeb faces)\n\nmost of the code is borrowed from\nhttps://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/finetune-CNN-catsvsdogs.ipynb",
"# helper functions\nimport mxnet as mx\nimport os, urllib\n\ndef download(url):\n filename = url.split(\"/\")[-1]\n if not os.path.exists(filename):\n urllib.urlretrieve(url, filename)\n \ndef get_model(prefix, epoch):\n download(prefix+'-symbol.json')\n download(prefix+'-%04d.params' % (epoch,))\n\nget_model('http://data.mxnet.io/models/imagenet/resnet/152-layers/resnet-152', 0)\nsym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)",
"Download the .rec files from\nhttps://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_train.lst.rec\nhttps://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_valid.lst.rec",
"download('https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_train.lst.rec')\ndownload('https://s3.amazonaws.com/smallya-test/randallnotrandall/rnr_valid.lst.rec')\n\n# Data Iterators for cats vs dogs dataset\n\nimport mxnet as mx\n\ndef get_iterators(batch_size, data_shape=(3, 224, 224)):\n train = mx.io.ImageRecordIter(\n path_imgrec = './rnr_train.lst.rec', \n data_name = 'data',\n label_name = 'softmax_label',\n batch_size = batch_size,\n data_shape = data_shape,\n shuffle = True,\n rand_crop = True,\n rand_mirror = True)\n val = mx.io.ImageRecordIter(\n path_imgrec = './rnr_valid.lst.rec',\n data_name = 'data',\n label_name = 'softmax_label',\n batch_size = batch_size,\n data_shape = data_shape,\n rand_crop = False,\n rand_mirror = False)\n return (train, val)\n\ndef get_fine_tune_model(symbol, arg_params, num_classes, layer_name='flatten0'):\n \"\"\"\n symbol: the pre-trained network symbol\n arg_params: the argument parameters of the pre-trained model\n num_classes: the number of classes for the fine-tune datasets\n layer_name: the layer name before the last fully-connected layer\n \"\"\"\n all_layers = sym.get_internals()\n net = all_layers[layer_name + '_output']\n net = mx.symbol.FullyConnected(data=net, num_hidden=num_classes, name='fc1')\n net = mx.symbol.SoftmaxOutput(data=net, name='softmax')\n new_args = dict({k:arg_params[k] for k in arg_params if 'fc1' not in k})\n return (net, new_args)\n\nnum_classes = 2 # RANDALL OR NOT\n(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)\n\nimport logging\nhead = '%(asctime)-15s %(message)s'\nlogging.basicConfig(level=logging.DEBUG, format=head)\n\ndef fit(symbol, arg_params, aux_params, train, val, batch_size, num_gpus=1, num_epoch=1):\n devs = [mx.gpu(i) for i in range(num_gpus)] # replace mx.gpu by mx.cpu for CPU training\n mod = mx.mod.Module(symbol=new_sym, context=devs)\n mod.bind(data_shapes=train.provide_data, label_shapes=train.provide_label)\n mod.init_params(initializer=mx.init.Xavier(rnd_type='gaussian', factor_type=\"in\", magnitude=2))\n mod.set_params(new_args, aux_params, allow_missing=True)\n mod.fit(train, val, \n num_epoch=num_epoch,\n batch_end_callback = mx.callback.Speedometer(batch_size, 10), \n kvstore='device',\n optimizer='sgd',\n optimizer_params={'learning_rate':0.009},\n eval_metric='acc')\n \n return mod\n\nnum_classes = 2 # This is binary classification (Randall vs not Randall)\nbatch_per_gpu = 16\nnum_gpus = 4\n(new_sym, new_args) = get_fine_tune_model(sym, arg_params, num_classes)\n\nbatch_size = batch_per_gpu * num_gpus\n(train, val) = get_iterators(batch_size)\nmod = fit(new_sym, new_args, aux_params, train, val, batch_size, num_gpus)\n\n#metric = mx.metric.Accuracy()\n#mod_score = mod.score(val, metric)\n#print mod_score\n\nprefix = 'resnet-mxnet-rnr'\nepoch = 1\nmc = mod.save_checkpoint(prefix, epoch)\n\n# load the model, make sure you have executed previous cells to train\nimport cv2\ndshape = [('data', (1,3,224,224))]\n\ndef load_model(s_fname, p_fname):\n \"\"\"\n Load model checkpoint from file.\n :return: (arg_params, aux_params)\n arg_params : dict of str to NDArray\n Model parameter, dict of name to NDArray of net's weights.\n aux_params : dict of str to NDArray\n Model parameter, dict of name to NDArray of net's auxiliary states.\n \"\"\"\n symbol = mx.symbol.load(s_fname)\n save_dict = mx.nd.load(p_fname)\n arg_params = {}\n aux_params = {}\n for k, v in save_dict.items():\n tp, name = k.split(':', 1)\n if tp == 'arg':\n arg_params[name] = v\n if tp == 'aux':\n 
aux_params[name] = v\n return symbol, arg_params, aux_params\n\nmodel_symbol = \"resnet-mxnet-rnr-symbol.json\"\nmodel_params = \"resnet-mxnet-rnr-0001.params\"\nsym, arg_params, aux_params = load_model(model_symbol, model_params)\nmod = mx.mod.Module(symbol=sym)\n\n# bind the model and set training == False; Define the data shape\nmod.bind(for_training=False, data_shapes=dshape)\nmod.set_params(arg_params, aux_params)",
"Lets see if we can predict if that's a Randall image\n<img src=\"https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png\"/>",
"import urllib2\nimport numpy as np\n\nfrom collections import namedtuple\nBatch = namedtuple('Batch', ['data'])\n\ndef preprocess_image(img, show_img=False):\n '''\n convert the image to a numpy array\n '''\n img = cv2.resize(img, (224, 224))\n img = np.swapaxes(img, 0, 2)\n img = np.swapaxes(img, 1, 2) \n img = img[np.newaxis, :] \n return img\n\nurl = 'https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png'\nreq = urllib2.urlopen(url)\n\nimage = np.asarray(bytearray(req.read()), dtype=\"uint8\")\nimage = cv2.imdecode(image, cv2.IMREAD_COLOR)\nimg = preprocess_image(image)\n\nmod.forward(Batch([mx.nd.array(img)]))\n\n# predict\nprob = mod.get_outputs()[0].asnumpy()\nlabels = [\"Randall\", \"Not Randall\"]\nprint labels[prob.argmax()], max(prob[0])",
"yay! that's Randall\nLets visualize the filters",
"## Feature extraction\nimport matplotlib.pyplot as plt\nimport cv2\nimport numpy as np\n# define a simple data batch\nfrom collections import namedtuple\nBatch = namedtuple('Batch', ['data'])\n\ndef get_image(url, show=False):\n # download and show the image\n fname = mx.test_utils.download(url)\n img = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB)\n if img is None:\n return None\n if show:\n plt.imshow(img)\n plt.axis('off')\n # convert into format (batch, RGB, width, height)\n img = cv2.resize(img, (224, 224))\n img = np.swapaxes(img, 0, 2)\n img = np.swapaxes(img, 1, 2)\n img = img[np.newaxis, :]\n return img\n\n# list the last 10 layers\nall_layers = sym.get_internals()\nprint all_layers.list_outputs()[-10:]\n\n#fe_sym = all_layers['flatten0_output']\nfe_sym = all_layers['conv0_output']\nfe_mod = mx.mod.Module(symbol=fe_sym, context=mx.cpu(), label_names=None)\nfe_mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))])\nfe_mod.set_params(arg_params, aux_params)\n\nurl = 'https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png'\nimg = get_image(url)\nfe_mod.forward(Batch([mx.nd.array(img)]))\nfeatures = fe_mod.get_outputs()[0].asnumpy()\nprint features.shape \n\nfrom PIL import Image\nimport numpy as np\n\n%matplotlib inline\n\nw, h = 112, 112\n\n# Plot helpers\ndef plots(ims, figsize=(12,6), rows=1, interp=False, titles=None):\n \n\n if type(ims[0]) is np.ndarray:\n ims = np.array(ims).astype(np.uint8)\n #print ims.shape\n #if (ims.shape[-1] != 3):\n # ims = ims.transpose((0,2,3,1))\n f = plt.figure(figsize=figsize)\n for i in range(len(ims)):\n sp = f.add_subplot(rows, len(ims)//rows, i+1)\n sp.axis('Off')\n if titles is not None:\n sp.set_title(titles[i], fontsize=16)\n plt.imshow(ims[i], interpolation=None if interp else 'none')\n \ndef plots_idx(idx, titles=None):\n plots([features[0][i] for i in idx])\n\n \nfname = mx.test_utils.download(url)\nimg = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2RGB)\nplt.axis('off')\nplt.imshow(img)\n\nplots_idx(range(0,5))\nplots_idx(range(5,10))\nplots_idx(range(10,15))\n\n\n#data = np.zeros((h, w, 3), dtype=np.uint8)\n#img = Image.fromarray(features[0][:2028], 'RGB')\n#img.show()",
"<img src=\"https://d0.awsstatic.com/Developer%20Marketing/evangelists/evangelist-bio-randall-hunt.png\"/>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
idekerlab/graph-services | notebooks/DEMO.ipynb | mit | [
"cxMate Service DEMO\nBy Ayato Shimada, Mitsuhiro Eto\nThis DEMO shows\n1. detect communities using an igraph's community detection algorithm\n2. paint communities (nodes and edges) in different colors\n3. perform layout using graph-tool's sfdp algorithm",
"# Tested on:\n!python --version",
"Send CX to service using requests module\nServices are built on a server\nYou don't have to construct graph libraries in your local environment.\nIt is very easy to use python-igraph and graph-tools.\nIn order to send CX\n\nrequests : to send CX file to service in Python. (curl also can be used.)\njson : to convert object to a CX formatted string.",
"import requests\nimport json\n\nurl_community = 'http://localhost:80' # igraph's community detection service URL\nurl_layout = 'http://localhost:3000' # graph-tool's layout service URL\nheaders = {'Content-type': 'application/json'}",
"Network used for DEMO\nThis DEMO uses yeastHQSubnet.cx as original network.\n- 2924 nodes\n- 6827 edges\n<img src=\"example1.png\" alt=\"Drawing\" style=\"width: 500px;\"/>\n\n1. igraph community detection and color generator service\nIn order to detect communities, igraph's community detection service can be used. \nHow to use the service on Jupyter Notebook\n\nopen the CX file using open()\nset parameters in dictionary format. (About parameters, see the document of service.)\npost the CX data to URL of service using requests.post()",
"data = open('./yeastHQSubnet.cx') # 1.\nparameter = {'type': 'leading_eigenvector', 'clusters': 5, 'palette': 'husl'} # 2.\nr = requests.post(url=url_community, headers=headers, data=data, params=parameter) # 3.",
"What happened?\nOutput contains\ngraph with community membership + color assignment for each group.\n- node1 : group 1, red\n- node2 : group 1, red\n- node3 : group 2, green\n...\nYou don't have to create your own color palette manually.\nTo save and look the output data, you can use r.json()['data']\nNote\n- When you use this output as input of next service, you must use json.dumps(r.json()['data'])\n- You must replace single quotation to double quotation in output file.",
"import re\nwith open('output1.cx', 'w') as f:\n # single quotation -> double quotation\n output = re.sub(string=str(r.json()['data']), pattern=\"'\", repl='\"')\n f.write(output)",
"3. graph-tool layout service\nIn order to perform layout algorithm, graph-tool's layout algorithm service can be used. \nC++ optimized parallel, community-structure-aware layout algorithms\nYou can use the community structure as a parameter for layout, and result reflects its structure.\nYou can use graph-tool's service in the same way as igraph's service.\nBoth input and output of cxMate service are CX, NOT igraph's object, graph-tool's object and so on.\nSo, you don't have to convert igraph object to graph-tools object.\n<img src=\"service.png\" alt=\"Drawing\" style=\"width: 750px;\"/>\nHow to use the service on Jupyter Notebook\n\nopen the CX file using json.dumps(r.json()['data'])\nset parameters in dictionary format. (About parameters, see the document of service.)\npost the CX data to URL of service using requests.post()",
"data2 = json.dumps(r.json()['data']) # 1.\nparameter = {'only-layout': False, 'groups': 'community'} # 2. \nr2 = requests.post(url=url_layout, headers=headers, data=data2, params=parameter) # 3.",
"Save .cx file\nTo save and look the output data, you can use r.json()['data']",
"import re\nwith open('output2.cx', 'w') as f:\n # single quotation -> double quotation\n output = re.sub(string=str(r2.json()['data']), pattern=\"'\", repl='\"')\n f.write(output)",
"Color Palette\nIf you want to change color of communities, you can do it easily.\nMany color palettes of seaborn can be used. (See http://seaborn.pydata.org/tutorial/color_palettes.html)",
"%matplotlib inline\nimport seaborn as sns, numpy as np\nfrom ipywidgets import interact, FloatSlider",
"Default Palette\nWithout setting parameter 'palette', 'husl' is used as color palette.",
"def show_husl(n):\n sns.palplot(sns.color_palette('husl', n))\nprint('palette: husl')\ninteract(show_husl, n=10);",
"Other palettes",
"def show_pal0(palette):\n sns.palplot(sns.color_palette(palette, 24))\ninteract(show_pal0, palette='deep muted pastel bright dark colorblind'.split());\n\nsns.choose_colorbrewer_palette('qualitative');\n\nsns.choose_colorbrewer_palette('sequential');"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
indranilsinharoy/PyZDDE | Examples/IPNotebooks/01 Notes on ipzCaptureWindow functions.ipynb | mit | [
"Using ipzCaptureWindow and ipzCaptureWindow2 for embedding graphic analysis windows into notebook\n<img src=\"https://raw.githubusercontent.com/indranilsinharoy/PyZDDE/master/Doc/Images/articleBanner_01_ipzcapturewindow.png\" height=\"230\">\nPlease feel free to e-mail any corrections, comments and suggestions to the author (Indranil Sinharoy) \nLast updated: 12/27/2015\nLicense: Creative Commons Attribution 4.0 International\n<font color='red'>NOTE:</font>\nThe current version of OpticStudio doesn't support function the function ipzCaptureWindow(). This is because the dataitem GetMetaFile has been deprecated. However, the function ipzCaptureWindowLQ() works fine as it uses ZPL macro to grap a snapshot of an open graphics window.\nBoth functions works just fine in Zemax version 13.2 and earlier.\nWhy are there two functions for doing the same thing?\nIn this notebook we explore the different use cases of the functions ipzCaptureWindow() and ipzCaptureWindowLQ().\n<font color='red'>ipzCaptureWindowLQ()</font>\nipzCaptureWindowLQ(), which takes as input a window number, uses ZPL macros to retrieve a \"screenshot\" from Zemax main application (note the window open in the main application is retrieved and not in the DDE server). The number is assigned by Zemax when an analysis window is opened in Zemax, and is not tied to any specific analysis. The quality of the rendered image is dependent on the quality of the JPEG image provided by Zemax. Frankly, I haven't found the quality to be any qood. The \"LQ\" in the name of the function indicates low-quality. For using ipzCaptureWindowLQ() we will also need to specify the ZPL macro path to PyZDDE. \n<font color='red'>ipzCaptureWindow()</font>\nipzCaptureWindow() generally produces better quality graphis. It takes as input the 3-letter string code for the type of analysis. It uses zGetMetaFile() to request ZEMAX to output a windows metafile (standard or enhanced), resizes and converts into a PNG image using ImageMagick and embeds into a notebook cell. Note that since it uses zGetMetaFile(), the analysis window from the DDE server is retrieved unlike in the case of ipzCaptureWindowLQ(). We can also ask ipzCaptureWindow() not to render the PNG image; instead to return the pixel array as a Numpy ndarray. We can then use any graphic rendering tool such as matplotlib to manipulate, render and annotate the graphic. This notebook mainly focuses on ipzCaptureWindow(). \nipzCaptureWindowLQ() is much quicker than ipzCaptureWindow() as it doesn't have to do any intermediate image conversions. It is meant for quick interactive use (provided the ZPL macro path is provided to PyZDDE as explained later). ipzCaptureWindow() provides more flexibility and better quality images, though it is a little slower than ipzCaptureWindowLQ().\nNote that it is assumed that PyZDDE is in the Python search path.\nImport PyZDDE and create a pyzdde communication object",
"import os\nimport matplotlib.pyplot as plt\nimport pyzdde.zdde as pyz\n%matplotlib inline\n\nl = pyz.createLink() # create a DDE link object for communication",
"Load a lens file",
"zfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')\nl.zLoadFile(zfile)",
"Perform a quick-focus",
"l.zQuickFocus()",
"Example of a Layout plot\nUsing ipzCaptureWindow to directly embed a Layout plot into the notebook.",
"l.ipzCaptureWindow('Lay', percent=15, gamma=0.4)",
"Why do we need to set gamma? \n\n\n Is there one gamma value good for all analysis window rendering? \n\n\nUpto Zemax13 there was no way to control othe thickness of the lines produced by ZEMAX for the metafiles. Generally the lines produced were very thin and the rescaled version would be too light to be visible. One way in which this problem was addressed is to lowpass filter the original image, rescale and then use a gamma value less than one during the conversion from metafile to PNG. This is probably not the optimal solution. One obvious side effect is that the black text becomes very thick and ugly.\nInstead of embedding the figure directly, we can also get a pixel array using PyZDDE. Plotting the returned array using matplotlib may allow more control and annotation options as shown below:",
"arr = l.ipzCaptureWindow('Lay', percent=15, gamma=0.08, retArr=True)",
"Now that we have the pixel array, we can either use the convenience function provided in PyZDDE to make a quick plot, or make our own figure and plot as we want it.\nLet's first see how we can use the convenience function, imshow(), provided by PyZDDE to make a cropped plot. The functions takes as input the pixel array, a tuple indicating the number of pixels to crop from the left, right, top, bottom sides of the pixel array, a tuple indicating the matplotlib figure size (optional), and a title string (optional).",
"pyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), figsize=(10,10), title='Layout Plot')",
"Next, we will create a figure and direct PyZDDE to render the Layout plot in the provided figure and axes. We can then annotate the figure as we like.\nBut first we will get some first-order properties of the lens",
"l.ipzGetFirst()\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\n\n# Render the array\npyz.imshow(arr, cropBorderPixels=(5, 5, 1, 90), fig=fig, faxes=ax)\n\nax.set_title('Layout plot', fontsize=16)\n# Annotate Lens numbers\nax.text(41, 70, \"L1\", fontsize=12)\nax.text(98, 105, \"L2\", fontsize=12)\nax.text(149, 89, \"L3\", fontsize=12) \n\n# Annotate the lens with radius of curvature information\ncol = (0.08,0.08,0.08)\ns1_r = 1.0/l.zGetSurfaceData(1,2)\nax.annotate(\"{:0.2f}\".format(s1_r), (37, 232), (8, 265), fontsize=12, \n arrowprops=dict(arrowstyle=\"->\", linewidth=0.45, color=col, relpos=(0.5,0.5)))\ns2_r = 1.0/l.zGetSurfaceData(2,2)\nax.annotate(\"{:0.2f}\".format(s2_r), (47, 232), (50, 265), fontsize=12, \n arrowprops=dict(arrowstyle=\"->\", linewidth=0.45, color=col, relpos=(0.5,0.5)))\ns6_r = 1.0/l.zGetSurfaceData(6,2)\nax.annotate(\"{:0.2f}\".format(s6_r), (156, 218), (160, 251), fontsize=12, \n arrowprops=dict(arrowstyle=\"->\", linewidth=0.45, color=col, relpos=(0.5,0.5)))\nax.text(5, 310, \"Cooke Triplet, EFL = {} mm, F# = {}, Total track length = {} mm\"\n .format(50, 5, 60.177), fontsize=14) \nplt.show()",
"Example of Ray Fan plot",
"l.ipzCaptureWindow('Ray', percent=17, gamma=0.55)\n\nrarr = l.ipzCaptureWindow('Ray', percent=25, gamma=0.15, retArr=True)\n\nfig = plt.figure(figsize=(10,10))\nax = fig.add_subplot(111)\npyz.imshow(rarr, cropBorderPixels=(5, 5, 48, 170), fig=fig, faxes=ax)\nax.set_title('Transverse Ray Fan Plot for OBJ: 20.00 (deg)', fontsize=14)\nplt.show()",
"Example of Spot diagram",
"l.ipzCaptureWindow('Spt', percent=16, gamma=0.5)\n\nsptd = l.ipzCaptureWindow('Spt', percent=25, gamma=0.15, retArr=True)\n\nfig = plt.figure(figsize=(8,8))\nax = fig.add_subplot(111)\npyz.imshow(sptd, cropBorderPixels=(150, 150, 30, 180), fig=fig, faxes=ax)\nax.set_title('Spot diagram for OBJ: 20.00 (deg)', fontsize=14)\nplt.show()",
"Examples of using ipzCaptureWindowLQ() function in Zemax 13.2 or earlier\nipzCaptureWindowLQ() is useful for quickly capturing a graphic window, and embedding into an IPython notebook or QtConsole.\nIn order to use this function, please copy the ZPL macros from \"PyZDDE\\ZPLMacros\" to the macro directory where Zemax is expecting the ZPL macros to be (i.e. the folder set in Zemax->Preference->Folders->ZPL).\nFor this particular example, the macro folder path is set to \"C:\\PROGRAMSANDEXPERIMENTS\\ZEMAX\\Macros\"",
"l.zSetMacroPath(r\"C:\\PROGRAMSANDEXPERIMENTS\\ZEMAX\\Macros\")\n\nl.ipzCaptureWindowLQ(1)",
"Note that the above command didn't work, because we need to push the lens from the DDE server to the Zemax main window first. Then we also need to open each window.",
"l.zPushLens()",
"Now open the layout analysis window in Zemax. Assuming that this is the first analysis window that has been open, Zemax would have assigned the number 1 to it.",
"l.ipzCaptureWindowLQ(1)",
"Open the MTF analysis window in Zemax now.",
"l.ipzCaptureWindowLQ(2)\n\npyz.closeLink()",
"Examples of using ipzCaptureWindowLQ() function in Zemax 14 or later (OpticStudio)\nIn order to do this experiment, a new instance of Zemax 15 was opened, and new link created.",
"l = pyz.createLink()\n\nzfile = os.path.join(l.zGetPath()[1], 'Sequential', 'Objectives', 'Cooke 40 degree field.zmx')\nl.zLoadFile(zfile)\n\nl.zPushLens()\n\n# Set the macro path\nl.zSetMacroPath(r\"C:\\PROGRAMSANDEXPERIMENTS\\ZEMAX\\Macros\")",
"Now open the layout analysis window in OpticStudio as before.",
"l.ipzCaptureWindowLQ(1)",
"Open FFT MTF analysis window",
"l.ipzCaptureWindowLQ(2)",
"Next, the FFT PSF analysis window was opened",
"l.ipzCaptureWindowLQ(3)",
"A few others .... just for show",
"l.ipzCaptureWindowLQ(4) # Shaded Model\n\nl.close()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
google/jax-md | notebooks/customizing_potentials_cookbook.ipynb | apache-2.0 | [
"<a href=\"https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/customizing_potentials_cookbook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCustomizing Potentials in JAX MD\nThis cookbook was contributed by Carl Goodrich.",
"#@title Imports & Utils\n!pip install -q git+https://www.github.com/google/jax-md\n\n\nimport numpy as onp\n\nimport jax.numpy as np\nfrom jax import random\nfrom jax import jit, grad, vmap, value_and_grad\nfrom jax import lax\nfrom jax import ops\n\nfrom jax.config import config\nconfig.update(\"jax_enable_x64\", True)\n\nfrom jax_md import space, smap, energy, minimize, quantity, simulate, partition\n\nfrom functools import partial\nimport time\n\nf32 = np.float32\nf64 = np.float64\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams.update({'font.size': 16})\n#import seaborn as sns \n#sns.set_style(style='white')\n\ndef format_plot(x, y): \n plt.grid(True)\n plt.xlabel(x, fontsize=20)\n plt.ylabel(y, fontsize=20)\n \ndef finalize_plot(shape=(1, 0.7)):\n plt.gcf().set_size_inches(\n shape[0] * 1.5 * plt.gcf().get_size_inches()[1], \n shape[1] * 1.5 * plt.gcf().get_size_inches()[1])\n plt.tight_layout()\n\ndef calculate_bond_data(displacement_or_metric, R, dr_cutoff, species=None):\n if( not(species is None)):\n assert(False)\n \n metric = space.map_product(space.canonicalize_displacement_or_metric(displacement))\n dr = metric(R,R)\n\n dr_include = np.triu(np.where(dr<dr_cutoff, 1, 0)) - np.eye(R.shape[0],dtype=np.int32)\n index_list=np.dstack(np.meshgrid(np.arange(N), np.arange(N), indexing='ij'))\n\n i_s = np.where(dr_include==1, index_list[:,:,0], -1).flatten()\n j_s = np.where(dr_include==1, index_list[:,:,1], -1).flatten()\n ij_s = np.transpose(np.array([i_s,j_s]))\n\n bonds = ij_s[(ij_s!=np.array([-1,-1]))[:,1]]\n lengths = dr.flatten()[(ij_s!=np.array([-1,-1]))[:,1]]\n\n return bonds, lengths\n\ndef plot_system(R,box_size,species=None,ms=20):\n R_plt = onp.array(R)\n\n if(species is None):\n plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms)\n else:\n for ii in range(np.amax(species)+1):\n Rtemp = R_plt[species==ii]\n plt.plot(Rtemp[:, 0], Rtemp[:, 1], 'o', markersize=ms)\n\n plt.xlim([0, box_size])\n plt.ylim([0, box_size])\n plt.xticks([], [])\n plt.yticks([], [])\n\n finalize_plot((1,1))\n \nkey = random.PRNGKey(0)",
"Prerequisites\nThis cookbook assumes a working knowledge of Python and Numpy. The concept of broadcasting is particularly important both in this cookbook and in JAX MD. \nWe also assume a basic knowlege of JAX, which JAX MD is built on top of. Here we briefly review a few JAX basics that are important for us:\n\n\njax.vmap allows for automatic vectorization of a function. What this means is that if you have a function that takes an input x and returns an output y, i.e. y = f(x), then vmap will transform this function to act on an array of x's and return an array of y's, i.e. Y = vmap(f)(X), where X=np.array([x1,x2,...,xn]) and Y=np.array([y1,y2,...,yn]). \n\n\njax.grad employs automatic differentiation to transform a function into a new function that calculates its gradient, for example: dydx = grad(f)(x). \n\n\njax.lax.scan allows for efficient for-loops that can be compiled and differentiated over. See here for more details.\n\n\nRandom numbers are different in JAX. The details aren't necessary for this cookbook, but if things look a bit different, this is why.\n\n\nThe basics of user-defined potentials\nCreate a user defined potential function to use throughout this cookbook\nHere we create a custom potential that has a short-ranged, non-diverging repulsive interaction and a medium-ranged Morse-like attractive interaction. It takes the following form:\n\\begin{equation}\nV(r) =\n\\begin{cases}\n \\frac{1}{2} k (r-r_0)^2 - D_0,& r < r_0\\\n D_0\\left( e^{-2\\alpha (r-r_0)} -2 e^{-\\alpha(r-r_0)}\\right), & r \\geq r_0\n\\end{cases}\n\\end{equation}\nand has 4 parameters: $D_0$, $\\alpha$, $r_0$, and $k$.",
"def harmonic_morse(dr, D0=5.0, alpha=5.0, r0=1.0, k=50.0, **kwargs):\n U = np.where(dr < r0, \n 0.5 * k * (dr - r0)**2 - D0,\n D0 * (np.exp(-2. * alpha * (dr - r0)) - 2. * np.exp(-alpha * (dr - r0)))\n )\n return np.array(U, dtype=dr.dtype)",
"plot $V(r)$.",
"drs = np.arange(0,3,0.01)\nU = harmonic_morse(drs)\nplt.plot(drs,U)\nformat_plot(r'$r$', r'$V(r)$')\nfinalize_plot()",
"Calculate the energy of a system of interacting particles\nWe now want to calculate the energy of a system of $N$ spheres in $d$ dimensions, where each particle interacts with every other particle via our user-defined function $V(r)$. The total energy is\n\\begin{equation}\nE_\\text{total} = \\sum_{i<j}V(r_{ij}),\n\\end{equation}\nwhere $r_{ij}$ is the distance between particles $i$ and $j$. \nOur first task is to set up the system by specifying the $N$, $d$, and the size of the simulation box. We then use JAX's internal random number generator to pick positions for each particle.",
"N = 50\ndimension = 2\nbox_size = 6.8\n\nkey, split = random.split(key)\nR = random.uniform(split, (N,dimension), minval=0.0, maxval=box_size, dtype=f64) \n\nplot_system(R,box_size)",
"At this point, we could manually loop over all particle pairs and calculate the energy, keeping track of boundary conditions, etc. Fortunately, JAX MD has machinery to automate this. \nFirst, we must define two functions, displacement and shift, which contain all the information of the simulation box, boundary conditions, and underlying metric. displacement is used to calculate the vector displacement between particles, and shift is used to move particles. For most cases, it is recommended to use JAX MD's built in functions, which can be called using:\n* displacement, shift = space.free()\n* displacement, shift = space.periodic(box_size)\n* displacement, shift = space.periodic_general(T)\nFor demonstration purposes, we will define these manually for a square periodic box, though without proper error handling, etc. The following should have the same functionality as displacement, shift = space.periodic(box_size).",
"def setup_periodic_box(box_size):\n def displacement_fn(Ra, Rb, **unused_kwargs):\n dR = Ra - Rb\n return np.mod(dR + box_size * f32(0.5), box_size) - f32(0.5) * box_size\n\n def shift_fn(R, dR, **unused_kwargs):\n return np.mod(R + dR, box_size)\n\n return displacement_fn, shift_fn\n \ndisplacement, shift = setup_periodic_box(box_size)",
"We now set up a function to calculate the total energy of the system. The JAX MD function smap.pair takes a given potential and promotes it to act on all particle pairs in a system. smap.pair does not actually return an energy, rather it returns a function that can be used to calculate the energy. \nFor convenience and readability, we wrap smap.pair in a new function called harmonic_morse_pair. For now, ignore the species keyword, we will return to this later.",
"def harmonic_morse_pair(\n displacement_or_metric, species=None, D0=5.0, alpha=10.0, r0=1.0, k=50.0): \n D0 = np.array(D0, dtype=f32)\n alpha = np.array(alpha, dtype=f32)\n r0 = np.array(r0, dtype=f32)\n k = np.array(k, dtype=f32)\n return smap.pair(\n harmonic_morse,\n space.canonicalize_displacement_or_metric(displacement_or_metric),\n species=species,\n D0=D0,\n alpha=alpha,\n r0=r0,\n k=k)",
"Our helper function can be used to construct a function to compute the energy of the entire system as follows.",
"# Create a function to calculate the total energy with specified parameters\nenergy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)\n\n# Use this to calculate the total energy\nprint(energy_fn(R))\n\n# Use grad to calculate the net force\nforce = -grad(energy_fn)(R)\nprint(force[:5])",
"We are now in a position to use our energy function to manipulate the system. As an example, we perform energy minimization using JAX MD's implementation of the FIRE algorithm. \nWe start by defining a function that takes an energy function, a set of initial positions, and a shift function and runs a specified number of steps of the minimization algorithm. The function returns the final set of positions and the maximum absolute value component of the force. We will use this function throughout this cookbook.",
"def run_minimization(energy_fn, R_init, shift, num_steps=5000):\n dt_start = 0.001\n dt_max = 0.004\n init,apply=minimize.fire_descent(jit(energy_fn),shift,dt_start=dt_start,dt_max=dt_max)\n apply = jit(apply)\n\n @jit\n def scan_fn(state, i):\n return apply(state), 0.\n\n state = init(R_init)\n state, _ = lax.scan(scan_fn,state,np.arange(num_steps))\n\n return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))",
"Now run the minimization with our custom energy function.",
"Rfinal, max_force_component = run_minimization(energy_fn, R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"Create a truncated potential\nIt is often desirable to have a potential that is strictly zero beyond a well-defined cutoff distance. In addition, MD simulations require the energy and force (i.e. first derivative) to be continuous. To easily modify an existing potential $V(r)$ to have this property, JAX MD follows the approach taken by HOOMD Blue. \nConsider the function \n\\begin{equation}\nS(r) =\n\\begin{cases}\n 1,& r<r_\\mathrm{on} \\\n \\frac{(r_\\mathrm{cut}^2-r^2)^2 (r_\\mathrm{cut}^2 + 2r^2 - 3 r_\\mathrm{on}^2)}{(r_\\mathrm{cut}^2-r_\\mathrm{on}^2)^3},& r_\\mathrm{on} \\leq r < r_\\mathrm{cut}\\\n 0,& r \\geq r_\\mathrm{cut}\n\\end{cases}\n\\end{equation}\nHere we plot both $S(r)$ and $\\frac{dS(r)}{dr}$, both of which are smooth and strictly zero above $r_\\mathrm{cut}$.",
"dr = np.arange(0,3,0.01)\nS = energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)(dr)\nngradS = vmap(grad(energy.multiplicative_isotropic_cutoff(lambda dr: 1, r_onset=1.5, r_cutoff=2.0)))(dr)\nplt.plot(dr,S,label=r'$S(r)$')\nplt.plot(dr,ngradS,label=r'$\\frac{dS(r)}{dr}$')\nplt.legend()\nformat_plot(r'$r$','')\nfinalize_plot()",
"We then use $S(r)$ to create a new function \n\\begin{equation}\\tilde V(r) = V(r) S(r),\n\\end{equation} \nwhich is exactly $V(r)$ below $r_\\mathrm{on}$, strictly zero above $r_\\mathrm{cut}$ and is continuous in its first derivative.\nThis is implemented in JAX MD through energy.multiplicative_isotropic_cutoff, which takes in a potential function $V(r)$ (e.g. our harmonic_morse function) and returns a new function $\\tilde V(r)$.",
"harmonic_morse_cutoff = energy.multiplicative_isotropic_cutoff(\n harmonic_morse, r_onset=1.5, r_cutoff=2.0)\n\ndr = np.arange(0,3,0.01)\nV = harmonic_morse(dr)\nV_cutoff = harmonic_morse_cutoff(dr)\nF = -vmap(grad(harmonic_morse))(dr)\nF_cutoff = -vmap(grad(harmonic_morse_cutoff))(dr)\nplt.plot(dr,V, label=r'$V(r)$')\nplt.plot(dr,V_cutoff, label=r'$\\tilde V(r)$')\nplt.plot(dr,F, label=r'$-\\frac{d}{dr} V(r)$')\nplt.plot(dr,F_cutoff, label=r'$-\\frac{d}{dr} \\tilde V(r)$')\nplt.legend()\nformat_plot('$r$', '')\nplt.ylim(-13,5)\nfinalize_plot()",
"As before, we can use smap.pair to promote this to act on an entire system.",
"def harmonic_morse_cutoff_pair(\n displacement_or_metric, D0=5.0, alpha=5.0, r0=1.0, k=50.0,\n r_onset=1.5, r_cutoff=2.0): \n D0 = np.array(D0, dtype=f32)\n alpha = np.array(alpha, dtype=f32)\n r0 = np.array(r0, dtype=f32)\n k = np.array(k, dtype=f32)\n return smap.pair(\n energy.multiplicative_isotropic_cutoff(\n harmonic_morse, r_onset=r_onset, r_cutoff=r_cutoff),\n space.canonicalize_displacement_or_metric(displacement_or_metric),\n D0=D0,\n alpha=alpha,\n r0=r0,\n k=k)",
"This is implemented as before",
"# Create a function to calculate the total energy\nenergy_fn = harmonic_morse_cutoff_pair(displacement, D0=5.0, alpha=10.0, r0=1.0, \n k=500.0, r_onset=1.5, r_cutoff=2.0)\n\n# Use this to calculate the total energy\nprint(energy_fn(R))\n\n# Use grad to calculate the net force\nforce = -grad(energy_fn)(R)\nprint(force[:5])\n\n# Minimize the energy using the FIRE algorithm\nRfinal, max_force_component = run_minimization(energy_fn, R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"Specifying parameters\nDynamic parameters\nIn the above examples, the strategy is to create a function energy_fn that takes a set of positions and calculates the energy of the system with all the parameters (e.g. D0, alpha, etc.) baked in. However, JAX MD allows you to override these baked-in values dynamically, i.e. when energy_fn is called. \nFor example, we can print out the minimized energy and force of the above system with the truncated potential:",
"print(energy_fn(Rfinal))\nprint(-grad(energy_fn)(Rfinal)[:5])",
"This uses the baked-in values of the 4 parameters: D0=5.0,alpha=10.0,r0=1.0,k=500.0. If, for example, we want to dynamically turn off the attractive part of the potential, we simply pass D0=0 to energy_fn:",
"print(energy_fn(Rfinal, D0=0))",
"Since changing the potential moves the minimum, the force will not be zero:",
"print(-grad(energy_fn)(Rfinal, D0=0)[:5])",
"This ability to dynamically pass parameters is very powerful. For example, if you want to shrink particles each step during a simulation, you can simply specify a different r0 each step. \nThis is demonstrated below, where we run a Brownian dynamics simulation at zero temperature with continuously decreasing r0. The details of simulate.brownian are beyond the scope of this cookbook, but the idea is that we pass a new value of r0 to the function apply each time it is called. The function apply takes a step of the simulation, and internally it passes any extra parameters like r0 to energy_fn.",
"def run_brownian(energy_fn, R_init, shift, key, num_steps):\n init, apply = simulate.brownian(energy_fn, shift, \n dt=0.00001, kT=0.0, gamma=0.1)\n apply = jit(apply)\n\n # Define how r0 changes for each step\n r0_initial = 1.0\n r0_final = .5\n def get_r0(t):\n return r0_final + (r0_initial-r0_final)*(num_steps-t)/num_steps\n\n @jit\n def scan_fn(state, t):\n # Dynamically pass r0 to apply, which passes it on to energy_fn\n return apply(state, r0=get_r0(t)), 0\n\n key, split = random.split(key)\n state = init(split, R_init)\n\n state, _ = lax.scan(scan_fn,state,np.arange(num_steps))\n return state.position, np.amax(np.abs(-grad(energy_fn)(state.position)))",
"If we use the previous result as the starting point for the Brownian Dynamics simulation, we find exactly what we would expect, the system contracts into a finite cluster, held together by the attractive part of the potential.",
"key, split = random.split(key)\nRfinal2, max_force_component = run_brownian(energy_fn, Rfinal, shift, split, \n num_steps=6000)\nplot_system( Rfinal2, box_size )",
"Particle-specific parameters\nOur example potential has 4 parameters: D0, alpha, r0, and k. The usual way to pass these parameters is as a scalar (e.g. D0=5.0), in which case that parameter is fixed for every particle pair. However, Python broadcasting allows for these parameters to be specified separately for every different particle pair by passing an $(N,N)$ array rather than a scalar. \nAs an example, let's do this for the parameter r0, which is an effective way of generating a system with continuous polydispersity in particle size. Note that the polydispersity disrupts the crystalline order after minimization.",
"# Draw the radii from a uniform distribution\nkey, split = random.split(key)\nradii = random.uniform(split, (N,), minval=1.0, maxval=2.0, dtype=f64)\n\n# Rescale to match the initial volume fraction\nradii = np.array([radii * np.sqrt(N/(4.*np.dot(radii,radii)))])\n\n# Turn this into a matrix of sums\nr0_matrix = radii+radii.transpose()\n\n# Create the energy function using r0_matrix\nenergy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=r0_matrix, \n k=500.0)\n\n# Minimize the energy using the FIRE algorithm\nRfinal, max_force_component = run_minimization(energy_fn, R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"In addition to standard Python broadcasting, JAX MD allows for the special case of additive parameters. If a parameter is passed as a (N,) array p_vector, JAX MD will convert this into a (N,N) array p_matrix where p_matrix[i,j] = 0.5 (p_vector[i] + p_vector[j]). This is a JAX MD specific ability and not a feature of Python broadcasting.\nAs it turns out, our above polydisperse example falls into this category. Therefore, we could achieve the same result by passing r0=2.0*radii.",
"# Create the energy function the radii array\nenergy_fn = harmonic_morse_pair(displacement, D0=5.0, alpha=10.0, r0=2.*radii, \n k=500.0)\n\n# Minimize the energy using the FIRE algorithm\nRfinal, max_force_component = run_minimization(energy_fn, R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"Species\nIt is often important to specify parameters differently for different particle pairs, but doing so with full ($N$,$N$) matrices is both inefficient and obnoxious. JAX MD allows users to create species, i.e. $N_s$ groups of particles that are identical to each other, so that parameters can be passed as much smaller ($N_s$,$N_s$) matrices.\nFirst, create an array that specifies which particles belong in which species. We will divide our system into two species.",
"N_0 = N // 2 # Half the particles in species 0\nN_1 = N - N_0 # The rest in species 1\nspecies = np.array([0] * N_0 + [1] * N_1, dtype=np.int32)\nprint(species)",
"Next, create the $(2,2)$ matrix of r0's, which are set so that the overall volume fraction matches our monodisperse case.",
"rsmall=0.41099747 # Match the total volume fraction\nrlarge=1.4*rsmall\nr0_species_matrix = np.array([[2*rsmall, rsmall+rlarge],\n [rsmall+rlarge, 2*rlarge]])\nprint(r0_species_matrix)\n\nenergy_fn = harmonic_morse_pair(displacement, species=species, D0=5.0, \n alpha=10.0, r0=r0_species_matrix, k=500.0)\n\nRfinal, max_force_component = run_minimization(energy_fn, R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\n\nplot_system(Rfinal, box_size, species=species )",
"Dynamic Species\nJust like standard parameters, the species list can be passed dynamically as well. However, unlike standard parameters, you have to tell smap.pair that the species will be specified dynamically. To do this, set species=2 be the total number of types of particles when creating your energy function.\nThe following sets up an energy function where the attractive part of the interaction only exists between members of the first species, but where the species will be defined dynamically.",
"D0_species_matrix = np.array([[ 5.0, 0.0],\n [0.0, 0.0]])\n\nenergy_fn = harmonic_morse_pair(displacement, \n species=2, \n D0=D0_species_matrix, \n alpha=10.0,\n r0=0.5, \n k=500.0)",
"Now we set up a finite temperature Brownian Dynamics simulation where, at every step, particles on the left half of the simulation box are assigned to species 0, while particles on the right half are assigned to species 1.",
"def run_brownian(energy_fn, R_init, shift, key, num_steps):\n init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)\n # apply = jit(apply)\n\n # Define a function to recalculate the species each step\n def get_species(R):\n return np.where(R[:,0] < box_size / 2, 0, 1)\n\n @jit\n def scan_fn(state, t):\n # Recalculate the species list\n species = get_species(state.position)\n # Dynamically pass species to apply, which passes it on to energy_fn\n return apply(state, species=species, species_count=2), 0\n\n key, split = random.split(key)\n state = init(split, R_init)\n\n state, _ = lax.scan(scan_fn,state,np.arange(num_steps))\n return state.position,np.amax(np.abs(-grad(energy_fn)(state.position,\n species=get_species(state.position), \n species_count=2)))",
"When we run this, we see that particles on the left side form clusters while particles on the right side do not.",
"key, split = random.split(key)\nRfinal, max_force_component = run_brownian(energy_fn, R, shift, split, num_steps=10000)\nplot_system( Rfinal, box_size )",
"Efficeiently calculating neighbors\nThe most computationally expensive part of most MD programs is calculating the force between all pairs of particles. Generically, this scales with $N^2$. However, for systems with isotropic pairwise interactions that are strictly zero beyond a cutoff, there are techniques to dramatically improve the efficiency. The two most common methods are cell list and neighbor lists.\nCell lists\nThe technique here is to divide space into small cells that are just larger than the largest interaction range in the system. Thus, if particle $i$ is in cell $c_i$ and particle $j$ is in cell $c_j$, $i$ and $j$ can only interact if $c_i$ and $c_j$ are neighboring cells. Rather than searching all $N^2$ combinations of particle pairs for non-zero interactions, you only have to search the particles in the neighboring cells. \nNeighbor lists\nHere, for each particle $i$, we make a list of potential neighbors: particles $j$ that are within some threshold distance $r_\\mathrm{threshold}$. If $r_\\mathrm{threshold} = r_\\mathrm{cutoff} + \\Delta r_\\mathrm{threshold}$ (where $r_\\mathrm{cutoff}$ is the largest interaction range in the system and $\\Delta r_\\mathrm{threshold}$ is an appropriately chosen buffer size), then all interacting particles will appear in this list as long as no particles moves by more than $\\Delta r_\\mathrm{threhsold}/2$. There is a tradeoff here: smaller $\\Delta r_\\mathrm{threhsold}$ means fewer particles to search over each MD step but the list must be recalculated more often, while larger $\\Delta r_\\mathrm{threhsold}$ means slower force calculates but less frequent neighbor list calculations. \nIn practice, the most efficient technique is often to use cell lists to calculate neighbor lists. In JAX MD, this occurs under the hood, and so only calls to neighbor-list functionality are necessary.\nTo implement neighbor lists, we need two functions: 1) a function to create and update the neighbor list, and 2) an energy function that uses a neighbor list rather than operating on all particle pairs. We create these functions with partition.neighbor_list and smap.pair_neighbor_list, respectively. \npartition.neighbor_list takes basic box information as well as the maximum interaction range r_cutoff and the buffer size dr_threshold.",
" def harmonic_morse_cutoff_neighbor_list(\n displacement_or_metric,\n box_size,\n species=None,\n D0=5.0, \n alpha=5.0, \n r0=1.0, \n k=50.0,\n r_onset=1.0,\n r_cutoff=1.5, \n dr_threshold=2.0,\n format=partition.OrderedSparse,\n **kwargs): \n\n D0 = np.array(D0, dtype=np.float32)\n alpha = np.array(alpha, dtype=np.float32)\n r0 = np.array(r0, dtype=np.float32)\n k = np.array(k, dtype=np.float32)\n r_onset = np.array(r_onset, dtype=np.float32)\n r_cutoff = np.array(r_cutoff, np.float32)\n dr_threshold = np.float32(dr_threshold)\n\n neighbor_fn = partition.neighbor_list(\n displacement_or_metric, \n box_size, \n r_cutoff, \n dr_threshold,\n format=format)\n\n energy_fn = smap.pair_neighbor_list(\n energy.multiplicative_isotropic_cutoff(harmonic_morse, r_onset, r_cutoff),\n space.canonicalize_displacement_or_metric(displacement_or_metric),\n species=species,\n D0=D0,\n alpha=alpha,\n r0=r0,\n k=k)\n\n return neighbor_fn, energy_fn",
"To test this, we generate our new neighbor_fn and energy_fn, as well as a comparison energy function using the default approach.",
"r_onset = 1.5\nr_cutoff = 2.0\ndr_threshold = 1.0\n\nneighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(\n displacement, box_size, D0=5.0, alpha=10.0, r0=1.0, k=500.0,\n r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)\n\nenergy_fn_comparison = harmonic_morse_cutoff_pair(\n displacement, D0=5.0, alpha=10.0, r0=1.0, k=500.0,\n r_onset=r_onset, r_cutoff=r_cutoff)",
"Next, we use neighbor_fn.allocate and the current set of positions to populate the neighbor list.",
"nbrs = neighbor_fn.allocate(R)",
"To calculate the energy, we pass nbrs to energy_fn. The energy matches the comparison.",
"print(energy_fn(R, neighbor=nbrs))\nprint(energy_fn_comparison(R))",
"Note that by default neighbor_fn uses a cell list internally to populate the neighbor list. This approach fails when the box size in any dimension is less than 3 times $r_\\mathrm{threhsold} = r_\\mathrm{cutoff} + \\Delta r_\\mathrm{threshold}$. In this case, neighbor_fn automatically turns off the use of cell lists, and instead searches over all particle pairs. This can also be done manually by passing disable_cell_list=True to partition.neighbor_list. This can be useful for debugging or for small systems where the overhead of cell lists outweighs the benefit. \nUpdating neighbor lists\nThe function neighbor_fn has two different usages, depending on how it is called. When used as above, i.e. nbrs = neighbor_fn(R), a new neighbor list is generated from scratch. Internally, JAX MD uses the given positions R to estimate a maximum capacity, i.e. the maximum number of neighbors any particle will have at any point during the use of the neighbor list. This estimate can be adjusted by passing a value of capacity_multiplier to partition.neighbor_list, which defaults to capacity_multiplier=1.25.\nSince the maximum capacity is not known ahead of time, this construction of the neighbor list cannot be compiled. However, once a neighbor list is created in this way, repopulating the list with the same maximum capacity is a simpler operation that can be compiled. This is done by calling nbrs = neighbor_fn(R, nbrs). Internally, this checks if any particle has moved more than $\\Delta r_\\mathrm{threshold}/2$ and, if so, recomputes the neighbor list. If the new neighbor list exceeds the maximum capacity for any particle, the boolean variable nbrs.did_buffer_overflow is set to True. \nThese two uses together allow for safe and efficient neighbor list calculations. The example below demonstrates a typical simulation loop that uses neighbor lists.",
"def run_brownian_neighbor_list(energy_fn, neighbor_fn, R_init, shift, key, num_steps):\n nbrs = neighbor_fn.allocate(R_init)\n\n init, apply = simulate.brownian(energy_fn, shift, dt=0.00001, kT=1.0, gamma=0.1)\n\n def body_fn(state, t):\n state, nbrs = state\n nbrs = nbrs.update(state.position)\n state = apply(state, neighbor=nbrs)\n return (state, nbrs), 0\n\n key, split = random.split(key)\n state = init(split, R_init)\n\n step = 0\n step_inc=100\n while step < num_steps/step_inc:\n rtn_state, _ = lax.scan(body_fn, (state, nbrs), np.arange(step_inc))\n new_state, nbrs = rtn_state\n # If the neighbor list overflowed, rebuild it and repeat part of \n # the simulation.\n if nbrs.did_buffer_overflow:\n print('Buffer overflow.')\n nbrs = neighbor_fn.allocate(state.position)\n else:\n state = new_state\n step += 1\n\n return state.position",
"To run this, we consider a much larger system than we have to this point. Warning: running this may take a few minutes.",
"Nlarge = 100*N\nbox_size_large = 10*box_size\ndisplacement_large, shift_large = setup_periodic_box(box_size_large)\n\nkey, split1, split2 = random.split(key,3)\nRlarge = random.uniform(split1, (Nlarge,dimension), minval=0.0, maxval=box_size_large, dtype=f64) \n\ndr_threshold = 1.5\nneighbor_fn, energy_fn = harmonic_morse_cutoff_neighbor_list(\n displacement_large, box_size_large, D0=5.0, alpha=10.0, r0=1.0, k=500.0,\n r_onset=r_onset, r_cutoff=r_cutoff, dr_threshold=dr_threshold)\nenergy_fn = jit(energy_fn)\n\nstart_time = time.process_time()\nRfinal = run_brownian_neighbor_list(energy_fn, neighbor_fn, Rlarge, shift_large, split2, num_steps=4000)\nend_time = time.process_time()\nprint('run time = {}'.format(end_time-start_time))\n\nplot_system( Rfinal, box_size_large, ms=2 )",
"Bonds\nBonds are a way of specifying potentials between specific pairs of particles that are \"on\" regardless of separation. For example, it is common to employ a two-sided spring potential between specific particle pairs, but JAX MD allows the user to specify arbitrary potentials with static or dynamic parameters. \nCreate and implement a bond potential\nWe start by creating a custom potential that corresponds to a bistable spring, taking the form\n\\begin{equation}\nV(r) = a_4(r-r_0)^4 - a_2(r-r_0)^2.\n\\end{equation}\n$V(r)$ has two minima, at $r = r_0 \\pm \\sqrt{\\frac{a_2}{2a_4}}$.",
"def bistable_spring(dr, r0=1.0, a2=2, a4=5, **kwargs):\n return a4*(dr-r0)**4 - a2*(dr-r0)**2",
"Plot $V(r)$",
"drs = np.arange(0,2,0.01)\nU = bistable_spring(drs)\nplt.plot(drs,U)\nformat_plot(r'$r$', r'$V(r)$')\nfinalize_plot()",
"The next step is to promote this function to act on a set of bonds. This is done via smap.bond, which takes our bistable_spring function, our displacement function, and a list of the bonds. It returns a function that calculates the energy for a given set of positions.",
"def bistable_spring_bond(\n displacement_or_metric, bond, bond_type=None, r0=1, a2=2, a4=5):\n \"\"\"Convenience wrapper to compute energy of particles bonded by springs.\"\"\"\n r0 = np.array(r0, f32)\n a2 = np.array(a2, f32)\n a4 = np.array(a4, f32)\n return smap.bond(\n bistable_spring,\n space.canonicalize_displacement_or_metric(displacement_or_metric),\n bond,\n bond_type,\n r0=r0,\n a2=a2,\n a4=a4)",
"However, in order to implement this, we need a list of bonds. We will do this by taking a system minimized under our original harmonic_morse potential:",
"R_temp, max_force_component = run_minimization(harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0), R, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( R_temp, box_size )",
"We now place a bond between all particle pairs that are separated by less than 1.3. calculate_bond_data returns a list of such bonds, as well as a list of the corresponding current length of each bond.",
"bonds, lengths = calculate_bond_data(displacement, R_temp, 1.3)\n\nprint(bonds[:5]) # list of particle index pairs that form bonds\nprint(lengths[:5]) # list of the current length of each bond",
"We use this length as the r0 parameter, meaning that initially each bond is at the unstable local maximum $r=r_0$.",
"bond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths)",
"We now use our new bond_energy_fn to minimize the energy of the system. The expectation is that nearby particles should either move closer together or further apart, and the choice of which to do should be made collectively due to the constraint of constant volume. This is exactly what we see.",
"Rfinal, max_force_component = run_minimization(bond_energy_fn, R_temp, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"Specifying bonds dynamically\nAs with species or parameters, bonds can be specified dynamically, i.e. when the energy function is called. Importantly, note that this does NOT override bonds that were specified statically in smap.bond.",
"# Specifying the bonds dynamically ADDS additional bonds. \n# Here, we dynamically pass the same bonds that were passed statically, which \n# has the effect of doubling the energy\nprint(bond_energy_fn(R))\nprint(bond_energy_fn(R,bonds=bonds, r0=lengths))",
"We won't go thorugh a further example as the implementation is exactly the same as specifying species or parameters dynamically, but the ability to employ bonds both statically and dynamically is a very powerful and general framework.\nCombining potentials\nMost JAX MD functionality (e.g. simulations, energy minimizations) relies on a function that calculates energy for a set of positions. Importantly, while this cookbook focus on simple and robust ways of defining such functions, JAX MD is not limited to these methods; users can implement energy functions however they like. \nAs an important example, here we consider the case where the energy includes both a pair potential and a bond potential. Specifically, we combine harmonic_morse_pair with bistable_spring_bond.",
"# Note, the code in the \"Bonds\" section must be run prior to this.\nenergy_fn = harmonic_morse_pair(displacement,D0=0.,alpha=10.0,r0=1.0,k=1.0)\nbond_energy_fn = bistable_spring_bond(displacement, bonds, r0=lengths)\ndef combined_energy_fn(R):\n return energy_fn(R) + bond_energy_fn(R)",
"Here, we have set $D_0=0$, so the pair potential is just a one-sided repulsive harmonic potential. For particles connected with a bond, this raises the energy of the \"contracted\" minimum relative to the \"extended\" minimum.",
"drs = np.arange(0,2,0.01)\nU = harmonic_morse(drs,D0=0.,alpha=10.0,r0=1.0,k=1.0)+bistable_spring(drs)\nplt.plot(drs,U)\nformat_plot(r'$r$', r'$V(r)$')\nfinalize_plot()",
"This new energy function can be passed to the minimization routine (or any other JAX MD simulation routine) in the usual way.",
"Rfinal, max_force_component = run_minimization(combined_energy_fn, R_temp, shift)\nprint('largest component of force after minimization = {}'.format(max_force_component))\nplot_system( Rfinal, box_size )",
"Specifying forces instead of energies\nSo far, we have defined functions that calculate the energy of the system, which we then pass to JAX MD. Internally, JAX MD uses automatic differentiation to convert these into functions that calculate forces, which are necessary to evolve a system under a given dynamics. However, JAX MD has the option to pass force functions directly, rather than energy functions. This creates additional flexibility because some forces cannot be represented as the gradient of a potential.\nAs a simple example, we create a custom force function that zeros out the force of some particles. During energy minimization, where there is no stochastic noise, this has the effect of fixing the position of these particles.\nFirst, we break the system up into two species, as before.",
"N_0 = N // 2 # Half the particles in species 0\nN_1 = N - N_0 # The rest in species 1\nspecies = np.array([0]*N_0 + [1]*N_1, dtype=np.int32)\nprint(species)",
"Next, we we creat our custom force function. Starting with our harmonic_morse pair potential, we calculate the force manually (i.e. using built-in automatic differentiation), and then multiply the force by the species id, which has the desired effect.",
"energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=1.0,k=500.0)\nforce_fn = quantity.force(energy_fn)\n\ndef custom_force_fn(R, **kwargs):\n return vmap(lambda a,b: a*b)(force_fn(R),species)",
"Running simulations with custom forces is as easy as passing this force function to the simulation.",
"def run_minimization_general(energy_or_force, R_init, shift, num_steps=5000):\n dt_start = 0.001\n dt_max = 0.004\n init,apply=minimize.fire_descent(jit(energy_or_force),shift,dt_start=dt_start,dt_max=dt_max)\n apply = jit(apply)\n\n @jit\n def scan_fn(state, i):\n return apply(state), 0.\n\n state = init(R_init)\n state, _ = lax.scan(scan_fn,state,np.arange(num_steps))\n\n return state.position, np.amax(np.abs(quantity.canonicalize_force(energy_or_force)(state.position)))",
"We run this as usual,",
"key, split = random.split(key)\nRfinal, _ = run_minimization_general(custom_force_fn, R, shift)\nplot_system( Rfinal, box_size, species )",
"After the above minimization, the blue particles have the same positions as they did initially:",
"plot_system( R, box_size, species )",
"Note, this method for fixing particles only works when there is no stochastic noise (e.g. in Langevin or Brownian dynamics) because such noise affects partices whether or not they have a net force. A safer way to fix particles is to create a custom shift function.\nCoupled ensembles\nFor a final example that demonstrates the flexibility within JAX MD, lets do something that is particularly difficult in most standard MD packages. We will create a \"coupled ensemble\" -- i.e. a set of two identical systems that are connected via a $Nd$ dimensional spring. An extension of this idea is used, for example, in the Doubly Nudged Elastic Band method for finding transition states. \nIf the \"normal\" energy of each system is \n\\begin{equation}\nU(R) = \\sum_{i,j} V( r_{ij} ),\n\\end{equation}\nwhere $r_{ij}$ is the distance between the $i$th and $j$th particles in $R$ and the $V(r)$ is a standard pair potential, and if the two sets of positions, $R_0$ and $R_1$, are coupled via the potential\n\\begin{equation}\nU_\\mathrm{spr}(R_0,R_1) = \\frac 12 k_\\mathrm{spr} \\left| R_1 - R_0 \\right|^2,\n\\end{equation}\nso that the total energy of the system is \n\\begin{equation}\nU_\\mathrm{total} = U(R_0) + U(R_1) + U_\\mathrm{spr}(R_0,R_1).\n\\end{equation}",
"energy_fn = harmonic_morse_pair(displacement,D0=5.0,alpha=10.0,r0=0.5,k=500.0)\ndef spring_energy_fn(Rall, k_spr=50.0, **kwargs):\n metric = vmap(space.canonicalize_displacement_or_metric(displacement), (0, 0), 0)\n dr = metric(Rall[0],Rall[1])\n return 0.5*k_spr*np.sum((dr)**2)\ndef total_energy_fn(Rall, **kwargs):\n return np.sum(vmap(energy_fn)(Rall)) + spring_energy_fn(Rall)",
"We now have to define a new shift function that can handle arrays of shape $(2,N,d)$. In addition, we make two copies of our initial positions R, one for each system.",
"def shift_all(Rall, dRall, **kwargs):\n return vmap(shift)(Rall, dRall)\nRall = np.array([R,R])",
"Now, all we have to do is pass our custom energy and shift functions, as well as the $(2,N,d)$ dimensional initial position, to JAX MD, and proceed as normal. \nAs a demonstration, we define a simple and general Brownian Dynamics simulation function, similar to the simulation routines above except without the special cases (e.g. chaning r0 or species).",
"def run_brownian_simple(energy_or_force, R_init, shift, key, num_steps):\n init, apply = simulate.brownian(energy_or_force, shift, dt=0.00001, kT=1.0, gamma=0.1)\n apply = jit(apply)\n\n @jit\n def scan_fn(state, t):\n return apply(state), 0\n\n key, split = random.split(key)\n state = init(split, R_init)\n\n state, _ = lax.scan(scan_fn, state, np.arange(num_steps))\n return state.position",
"Note that nowhere in this function is there any indication that we are simulating an ensemble of systems. This comes entirely form the inputs: i.e. the energy function, the shift function, and the set of initial positions.",
"key, split = random.split(key)\nRall_final = run_brownian_simple(total_energy_fn, Rall, shift_all, split, num_steps=10000)",
"The output also has shape $(2,N,d)$. If we display the results, we see that the two systems are in similar, but not identical, positions, showing that we have succeeded in simulating a coupled ensemble.",
"for Ri in Rall_final:\n plot_system( Ri, box_size )\nfinalize_plot((0.5,0.5))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PythonFreeCourse/Notebooks | week07/1_Classes.ipynb | mit | [
"<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<span style=\"text-align: right; direction: rtl; float: right;\">מחלקות</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הקדמה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בוקר חדש, השמש הפציעה והחלטתם שצברתם מספיק ידע בקורס כדי לפתוח רשת חברתית משלכם, בשם צ'יקצ'וק.<br>\n אתם משליכים את מחברות הפייתון מהחלון ומתחילים לתכנת במרץ את המערכת שתעזור לכם לנהל את המשתמשים.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לכל משתמש יש את התכונות הבאות:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>שם פרטי</li>\n <li>שם משפחה</li>\n <li>כינוי</li>\n <li>גיל</li>\n</ul>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בחרו סוג משתנה שיאפשר לכם לאחסן בנוחות את הנתונים הללו.<br>\n צרו שני משתמשים לדוגמה, והשתמשו בסוג המשתנה שבחרתם.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n לפני שנציג את פתרון השאלה, נעמיק מעט ברעיון הכללי שעומד מאחורי הדוגמה הזו.<br>\n <mark>כל משתמש שניצור הוא מעין אסופת תכונות</mark> – במקרה שלנו התכונות הן שם פרטי, שם משפחה, כינוי וגיל.<br>\n לכל תכונה יהיה ערך המתאים לה, ויחד הערכים הללו יצרו משתמש אחד.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נמצא עוד דוגמאות לאסופות תכונות שכאלו:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>תכונותיו של שולחן הן גובה, מספר רגליים, צבע, אורך ורוחב.</li>\n <li>תכונותיה של נורה הן צבע ומצב (דולקת או כבויה).</li>\n <li>תכונותיו של תרגיל בקורס הן השבוע והמחברת שבהן הוא הופיע, כותרת והוראות התרגיל.</li>\n <li>תכונותיו של שיר הן מילות השיר, האומנים שהשתתפו ביצירתו ואורכו.</li>\n</ul>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n חשבו על עוד 3 דוגמאות לעצמים שאפשר לתאר כערכים עם אסופת תכונות.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ניצור שני משתמשים לדוגמה לפי תכונותיהם שהוצגו לעיל:\n</p>",
"user1 = {\n 'first_name': 'Christine',\n 'last_name': 'Daaé',\n 'nickname': 'Little Lotte',\n 'age': 20,\n}\nuser2 = {\n 'first_name': 'Elphaba',\n 'last_name': 'Thropp',\n 'nickname': 'Elphie',\n 'age': 19,\n}",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n תוכלו ליצור בעצמכם פונקציה שיוצרת משתמש חדש?<br>\n זה לא מסובך מדי:\n</p>",
"def create_user(first_name, last_name, nickname, current_age):\n return {\n 'first_name': first_name,\n 'last_name': last_name,\n 'nickname': nickname,\n 'age': current_age,\n }\n\n\n# נקרא לפונקציה כדי לראות שהכל עובד כמצופה\nnew_user = create_user('Bayta', 'Darell', 'Bay', 24)\nprint(new_user)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <mark>נוכל גם לממש פונקציות שיעזרו לנו לבצע פעולות על כל אחד מהמשתמשים.</mark><br>\n לדוגמה: הפונקציה <var>describe_as_a_string</var> תקבל משתמש ותחזיר לנו מחרוזת שמתארת אותו,<br>\n והפונקציה <var>celeberate_birthday</var> תקבל משתמש ותגדיל את גילו ב־1:\n</p>",
"def describe_as_a_string(user):\n first_name = user['first_name']\n last_name = user['last_name']\n full_name = f'{first_name} {last_name}'\n nickname = user['nickname']\n age = user['age']\n return f'{nickname} ({full_name}) is {age} years old.'\n\n\ndef celebrate_birthday(user):\n user['age'] = user['age'] + 1\n\n\nprint(describe_as_a_string(new_user))\ncelebrate_birthday(new_user)\nprint(\"--- After birthday\")\nprint(describe_as_a_string(new_user))",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/recall.svg\" style=\"height: 50px !important;\" alt=\"תזכורת\" title=\"תזכורת\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n הצלחנו לערוך את ערכו של <code>user['age']</code> מבלי להחזיר ערך, כיוון שמילונים הם mutable.<br>\n אם זה נראה לכם מוזר, חזרו למחברת על mutability ו־immutability.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בשלב הזה בתוכניתנו קיימות קבוצת פונקציות שמטרתן היא ניהול של משתמשים ושל תכונותיהם.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נוכל להוסיף למשתמש תכונות נוספות, כמו דוא\"ל ומשקל, לדוגמה,<br>\n או להוסיף לו פעולות שיהיה אפשר לבצע עליו, כמו הפעולה <var>eat_bourekas</var>, שמוסיפה לתכונת המשקל של המשתמש חצי קילו.<br>\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">חסרונות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אף על פי שהרעיון נחמד, ככל שנרבה להוסיף פעולות ותכונות, תגבר תחושת האי־סדר שאופפת את הקוד הזה.<br>\n קל לראות שהקוד שכתבנו מפוזר על פני פונקציות רבות בצורה לא מאורגנת.<br>\n במילים אחרות – אין אף מבנה בקוד שתחתיו מאוגדות כל הפונקציות והתכונות ששייכות לטיפול במשתמש.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הבעיה תצוף כשנרצה להוסיף לתוכנה שלנו עוד מבנים שכאלו.<br>\n לדוגמה, כשנרצה להוסיף לצ'יקצ'וק יכולת לניהול סרטונים – שתכונותיהם אורך סרטון ומספר לייקים, והפעולה עליהם היא היכולת לעשות Like לסרטון.<br>\n הקוד לניהול המשתמש והקוד לניהול הסרטונים עלולים להתערבב, יווצרו תלויות ביניהם וחוויית ההתמצאות בקוד תהפוך ללא נעימה בעליל.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n החוסר באיגוד התכונות והפונקציות אף מקשה על הקורא להבין לאן שייכות כל אחת מהתכונות והפונקציות, ומה תפקידן בקוד.<br>\n מי שמסתכל על הקוד שלנו לא יכול להבין מייד ש־<var>describe_as_a_string</var> מיועדת לפעול רק על מבנים שנוצרו מ־<var>create_user</var>.<br>\n הוא עלול לנסות להכניס מבנים אחרים ולהקריס את התוכנית, או גרוע מכך – להיתקל בבאגים בעתיד, בעקבות שימוש לא נכון בפונקציה.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">הגדרה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n במהלך המחברת ראינו דוגמאות למבנים שהגדרנו <mark>כאוספים של תכונות ושל פעולות</mark>.<br>\n משתמש באפליקציית צ'יקצ'וק, לדוגמה, מורכב מהתכונות שם פרטי, שם משפחה, כינוי וגיל, ומהפעולות \"חגוג יום הולדת\" ו\"תאר כמחרוזת\".<br>\n נורה עשויה להיות מורכבת מהתכונות צבע ומצב (דולקת או לא), ומהפעולות \"הדלק נורה\" ו\"כבה נורה\".<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <dfn>מחלקה</dfn> היא דרך לתאר לפייתון אוסף כזה של תכונות ושל פעולות, ולאגד אותן תחת מבנה אחד.<br>\n אחרי שתיארנו בעזרת מחלקה אילו תכונות ופעולות מאפיינות עצם מסוים, נוכל להשתמש בה כדי לייצר כמה עצמים כאלו שנרצה.<br> \n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נדמיין מחלקה כמו שבלונה – <mark>תבנית</mark> שמתארת אילו תכונות ופעולות מאפיינות סוג עצם מסוים.<br>\n מחלקה שעוסקת במשתמשים, לדוגמה, תתאר עבור פייתון מאילו תכונות ופעולות מורכב כל משתמש.<br>\n</p>\n\n<figure>\n <img src=\"images/user_class.svg?v=1\" style=\"max-width: 650px; margin-right: auto; margin-left: auto; text-align: center;\" alt=\"במרכז התמונה 
ניצבת צללית של אדם (משתמש). בצד ימין שלו יש תיבה עם הכותרת 'תכונות', ובתוכה המילים 'שם פרטי', 'שם משפחה', 'כינוי' ו'גיל'. בצד שמאל שלו יש תיבה נוספת הנושאת את הכותרת 'פעולות', ובתוכה המילים 'חגוג יום הולדת' ו'תאר משתמש'.\"/>\n <figcaption style=\"margin-top: 2rem; text-align: center; direction: rtl;\">איור המתאר את התכונות ואת הפעולות השייכות למחלקה \"משתמש\".</figcaption>\n</figure>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בעזרת אותה מחלקת משתמשים (או שבלונת משתמשים, אם תרצו), נוכל ליצור משתמשים רבים.<br>\n כל משתמש שניצור באמצעות השבלונה ייקרא \"<dfn>מופע</dfn>\" (או <dfn>Instance</dfn>) – יחידה אחת, עצמאית, שמכילה את התכונות והפעולות שתיארנו.<br>\n אנחנו נשתמש במחלקה שוב ושוב כדי ליצור כמה משתמשים שנרצה, בדיוק כמו שנשתמש בשבלונה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יש עוד הרבה מה להגיד והרבה מה להגדיר, אבל נשמע שמתחתי אתכם מספיק.<br>\n בואו ניגש לקוד!\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">יצירת מחלקות</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מחלקה בסיסית</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ראשית, ניצור את המחלקה הפשוטה ביותר שאנחנו יכולים לבנות, ונקרא לה <var>User</var>.<br>\n בהמשך המחברת נרחיב את המחלקה, והיא תהיה זו שמטפלת בכל הקשור במשתמשים של צ'יקצ'וק:\n</p>",
"class User:\n pass",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/recall.svg\" style=\"height: 50px !important;\" alt=\"תזכורת\" title=\"תזכורת\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n ניסינו ליצור את המבנה הכי קצר שאפשר, אבל <code>class</code> חייב להכיל קוד.<br>\n כדי לעקוף את המגבלה הזו, השתמשנו במילת המפתח <code>pass</code>, שאומרת לפייתון \"אל תעשי כלום\".\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בקוד שלמעלה השתמשנו במילת המפתח <code>class</code> כדי להצהיר על מחלקה חדשה.<br>\n מייד לאחר מכן ציינו את שם המחלקה שאנחנו רוצים ליצור – <var>User</var> במקרה שלנו.<br>\n שם המחלקה נתון לחלוטין לבחירתנו, והמילה <var>User</var> לא אומרת לפייתון שום דבר מיוחד. באותה המידה יכולנו לבחור כל שם אחר.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הדבר שחשוב לזכור הוא שהמחלקה היא <em>לא</em> המשתמש עצמו, אלא רק השבלונה שלפיה פייתון תבנה את המשתמש.<br>\n אמנם כרגע המחלקה <var>User</var> ריקה ולא מתארת כלום, אבל פייתון עדיין תדע ליצור משתמש חדש אם נבקש ממנה לעשות זאת.<br>\n נבקש מהמחלקה ליצור עבורנו משתמש חדש. נקרא לה בשמה ונוסיף סוגריים, בדומה לקריאה לפונקציה:\n</p>",
"user1 = User()",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כעת יצרנו משתמש, ואנחנו יכולים לשנות את התכונות שלו.<br>\n מבחינה מילולית, נהוג להגיד שיצרנו <dfn>מופע</dfn> (<dfn>Instance</dfn>) או <dfn>עצם</dfn> (אובייקט, <dfn>Object</dfn>) מסוג <var>User</var>, ששמו <var>user1</var>.<br>\n השתמשנו לשם כך ב<dfn>מחלקה</dfn> בשם <var>User</var>.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נשנה את תכונות המשתמש.<br>\n כדי להתייחס לתכונה של מופע כלשהו בפייתון, נכתוב את שם המשתנה שמצביע למופע, נקודה, ואז שם התכונה.<br>\n אם נרצה לשנות את התכונה – נבצע אליה השמה:\n</p>",
"user1.first_name = \"Miles\"\nuser1.last_name = \"Prower\"\nuser1.age = 8\nuser1.nickname = \"Tails\"",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נוכל לאחזר את התכונות הללו בקלות, באותה הצורה:\n</p>",
"print(user1.age)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ואם נבדוק מה הסוג של המשתנה <var>user1</var>, מצפה לנו הפתעה נחמדה:\n</p>",
"type(user1)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n איזה יופי! המחלקה גרמה לכך ש־<var>User</var> הוא ממש סוג משתנה בפייתון עכשיו.<br>\n קחו לעצמכם רגע להתפעל – יצרנו סוג משתנה חדש בפייתון!<br>\n אם כך, המשתנה <var>user1</var> מצביע על מופע של משתמש, שסוגו <var>User</var>.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ננסה ליצור מופע נוסף, הפעם של משתמש אחר:\n</p>",
"user2 = User()\nuser2.first_name = \"Harry\"\nuser2.last_name = \"Potter\"\nuser2.age = 39\nuser2.nickname = \"BoyWhoLived1980\"",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ונשים לב ששני המופעים מתקיימים זה לצד זה, ולא דורסים את הערכים זה של זה:\n</p>",
"print(f\"{user1.first_name} {user1.last_name} is {user1.age} years old.\")\nprint(f\"{user2.first_name} {user2.last_name} is {user2.age} years old.\")",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n המצב הזה מתקיים כיוון שכל קריאה למחלקה <var>User</var> יוצרת מופע חדש של משתמש.<br>\n כל אחד מהמופעים הוא ישות נפרדת שמתקיימת בזכות עצמה.\n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צרו מחלקה בשם <var>Point</var> שמייצגת נקודה.<br>\n צרו 2 מופעים של נקודות: אחת בעלת <var>x</var> שערכו 3 ו־<var>y</var> שערכו 1, והשנייה בעלת <var>x</var> שערכו 4 ו־<var>y</var> שערכו 1.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שמות מחלקה ייכתבו באות גדולה בתחילתם, כדי להבדילם מפונקציות וממשתנים רגילים.<br>\n אם שם המחלקה מורכב מכמה מילים, האות הראשונה בכל מילה תהא אות גדולה. בשם לא יופיעו קווים תחתונים.<br>\n לדוגמה, מחלקת <var>PopSong</var>.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מחלקה עם פעולות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יצירת מחלקה ריקה זה נחמד, אבל זה לא מרגיש שעשינו צעד מספיק משמעותי כדי לשפר את איכות הקוד מתחילת המחברת.<br>\n לדוגמה, אם אנחנו רוצים להדפיס את הפרטים של משתמש מסוים, עדיין נצטרך לכתוב פונקציה כזו:\n</p>",
"def describe_as_a_string(user):\n full_name = f'{user.first_name} {user.last_name}'\n return f'{user.nickname} ({full_name}) is {user.age} years old.'\n\n\nprint(describe_as_a_string(user2))",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפונקציה עדיין מסתובבת לה חופשייה ולא מאוגדת תחת אף מבנה – וזה בדיוק המצב שניסינו למנוע.<br>\n למזלנו הפתרון לבעיית איגוד הקוד הוא פשוט. נוכל להדביק את קוד הפונקציה תחת המחלקה <code>User</code>:\n</p>",
"class User:\n def describe_as_a_string(user):\n full_name = f'{user.first_name} {user.last_name}'\n return f'{user.nickname} ({full_name}) is {user.age} years old.'\n\n\nuser3 = User()\nuser3.first_name = \"Anthony John\"\nuser3.last_name = \"Soprano\"\nuser3.age = 61\nuser3.nickname = \"Tony\"",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בתא שלמעלה הגדרנו את הפונקציה <var>describe_as_a_string</var> בתוך המחלקה <var>User</var>.<br>\n פונקציה שמוגדרת בתוך מחלקה נקראת <dfn>פעולה</dfn> (<dfn>Method</dfn>), שם שניתן לה כדי לבדל אותה מילולית מפונקציה רגילה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n למעשה, בתא שלמעלה הוספנו את הפעולה <var>describe_as_a_string</var> לשבלונה של המשתמש.<br>\n מעכשיו, כל מופע חדש של משתמש יוכל לקרוא לפעולה <var>describe_as_a_string</var> בצורה הבאה:\n</p>",
"user3.describe_as_a_string()",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n חדי העין שמו ודאי לב למשהו מעט משונה בקריאה לפעולה <var>describe_as_a_string</var>.<br>\n הפעולה מצפה לקבל פרמטר (קראנו לו <var>user</var>), אבל כשקראנו לה בתא האחרון לא העברנו לה אף ארגומנט!<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n זהו קסם ידוע ונחמד של מחלקות: כשמופע קורא לפעולה כלשהי – אותו מופע עצמו מועבר אוטומטית כארגומנט הראשון לפעולה.<br>\n לדוגמה, בקריאה <code dir=\"ltr\">user3.describe_as_a_string()</code>, המופע <var>user3</var> הועבר לתוך הפרמטר <var>user</var> של <var>describe_as_a_string</var>.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n המוסכמה היא לקרוא תמיד לפרמטר הקסום הזה, זה שהולך לקבל את המופע, בשם <var>self</var>.<br>\n נשנה את ההגדרה שלנו בהתאם למוסכמה:\n</p>",
"class User:\n def describe_as_a_string(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n\nuser3 = User()\nuser3.first_name = \"Anthony John\"\nuser3.last_name = \"Soprano\"\nuser3.age = 61\nuser3.nickname = \"Tony\"\nuser3.describe_as_a_string()",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n טעות נפוצה היא לשכוח לשים <var>self</var> כפרמטר הראשון בפעולות שנגדיר.\n </p>\n </div>\n</div>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צרו פעולה בשם <var>describe_as_a_string</var> עבור מחלקת <var>Point</var> שיצרתם.<br>\n הפעולה תחזיר מחרוזת בצורת <samp dir=\"ltr\">(x, y)</samp>.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">יצירת מופע</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הפיסה החסרה בפאזל היא יצירת המופע.<br>\n אם נרצה ליצור משתמש חדש, עדיין נצטרך להציב בו תכונות אחת־אחת – וזה לא כזה כיף.<br>\n נשדרג את עצמנו ונכתוב פונקציה שקוראת ל־<var>User</var> ויוצרת מופע עם כל התכונות שלו:<br>\n</p>",
"def create_user(first_name, last_name, nickname, current_age):\n user = User()\n user.first_name = first_name\n user.last_name = last_name\n user.nickname = nickname\n user.age = current_age\n return user\n\n\nuser4 = create_user('Daenerys', 'Targaryen', 'Mhysa', 23)\nprint(f\"{user4.first_name} {user4.last_name} is {user4.age} years old.\")",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל הגדרה שכזו, כמו שכבר אמרנו, סותרת את כל הרעיון של מחלקות.<br>\n הרי המטרה של מחלקות היא קיבוץ כל מה שקשור בניהול התכונות והפעולות תחת המחלקה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נעתיק את <var>create_user</var> לתוך מחלקת <var>User</var>, בשינויים קלים: \n</p>\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>לא נשכח לשים את <var>self</var> כפרמטר ראשון בחתימת הפעולה.</li>\n <li>כפי שראינו, פעולות במחלקה מקבלות מופע ועובדות ישירות עליו, ולכן נשמיט את השורות <code dir=\"ltr\">user = User()</code> ו־<code dir=\"ltr\">return user</code>.</li>\n</ol>",
"class User:\n def describe_as_a_string(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n def create_user(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עכשיו נוכל ליצור משתמש חדש, בצורה החביבה והמקוצרת הבאה:\n</p>",
"user4 = User()\nuser4.create_user('Daenerys', 'Targaryen', 'Mhysa', 23)\nuser4.describe_as_a_string()",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגיל ביניים: מחלקת נקודות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מינרווה מקגונגל יצאה לבילוי לילי בסמטת דיאגון,<br>\n ואחרי לילה עמוס בשתיית שיכר בקלחת הרותחת, היא מעט מתקשה לחזור להוגוורטס.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הוסיפו את הפעולות <var>create_point</var> ו־<var>distance</var> למחלקת הנקודה שיצרתם.<br>\n הפעולה <var>create_point</var> תקבל כפרמטרים <var>x</var> ו־<var>y</var>, ותיצוק תוכן למופע שיצרתם.<br>\n הפעולה <var>distance</var> תחזיר את המרחק של מקגונגל מהוגוורטס, הממוקם בנקודה <span dir=\"ltr\">(0, 0)</span>.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נוסחת המרחק היא חיבור בין הערכים המוחלטים של נקודות ה־<var>x</var> וה־<var>y</var>.<br>\n לדוגמה:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>המרחק מהנקודה <pre dir=\"ltr\" style=\"display: inline; margin: 0 0.5em;\">x = 5, y = 3</pre> הוא <samp>8</samp>.</li>\n <li>המרחק מהנקודה <pre dir=\"ltr\" style=\"display: inline; margin: 0 0.5em;\">x = 0, y = 3</pre> הוא <samp>3</samp>.</li>\n <li>המרחק מהנקודה <pre dir=\"ltr\" style=\"display: inline; margin: 0 0.5em;\">x = -3, y = 3</pre> הוא <samp>6</samp>.</li>\n <li>המרחק מהנקודה <pre dir=\"ltr\" style=\"display: inline; margin: 0 0.5em;\">x = -5, y = 0</pre> הוא <samp>5</samp>.</li>\n <li>המרחק מהנקודה <pre dir=\"ltr\" style=\"display: inline; margin: 0 0.5em;\">x = 0, y = 0</pre> הוא <samp>0</samp>.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ודאו שהתוכנית שלכם מחזירה <samp dir=\"ltr\">Success!</samp> עבור הקוד הבא:\n</p>",
"current_location = Point()\ncurrent_location.create_point(5, 3)\nif current_location.distance() == 8:\n print(\"Success!\")",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">פעולות קסם</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כדי להקל אפילו עוד יותר על המלאכה, בפייתון יש <dfn>פעולות קסם</dfn> (<dfn>Magic Methods</dfn>).<br>\n אלו פעולות עם שם מיוחד, שאם נגדיר אותן במחלקה, הן ישנו את ההתנהגות שלה או של המופעים הנוצרים בעזרתה.\n</p>\n\n<h4 style=\"text-align: right; direction: rtl; float: right; clear: both;\">הפעולה <code>__str__</code></h4>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נתחיל, לדוגמה, מהיכרות קצרה עם פעולת הקסם <code>__str__</code> (עם קו תחתון כפול, מימין ומשמאל לשם הפעולה).<br>\n אם ננסה סתם ככה להמיר למחרוזת את <var>user4</var> שיצרנו קודם לכן, נקבל בהלה והיסטריה:",
"user4 = User()\nuser4.create_user('Daenerys', 'Targaryen', 'Mhysa', 23)\nstr(user4)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פייתון אמנם אומרת דברים נכונים, כמו שמדובר באובייקט (מופע) מהמחלקה <var>User</var> ואת הכתובת שלו בזיכרון, אבל זה לא באמת מועיל.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כיוון שפונקציית ההדפסה <var>print</var>, מאחורי הקלעים, מבקשת את צורת המחרוזת של הארגומנט שמועבר אליה,<br>\n גם קריאה ל־<var>print</var> ישירות על <var>user4</var> תיצור את אותה תוצאה לא ססגונית:\n</p>",
"print(user4)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n המחלקה שלנו, כמובן, כבר ערוכה להתמודד עם המצב.<br>\n בזכות הפעולה <var>describe_as_a_string</var> שהגדרנו קודם לכן נוכל להדפיס את פרטי המשתמש בקלות יחסית:\n</p>",
"print(user4.describe_as_a_string())",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n אבל יש דרך קלה עוד יותר!<br>\n ניחשתם נכון – פעולת הקסם <code>__str__</code>.<br>\n נחליף את השם של הפעולה <var>describe_as_a_string</var>, ל־<code>__str__</code>:\n</p>",
"class User:\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n def create_user(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n\n\nuser5 = User()\nuser5.create_user('James', 'McNulty', 'Jimmy', 49)\nprint(user5)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ראו איזה קסם! עכשיו המרה של כל מופע מסוג <var>User</var> למחרוזת היא פעולה ממש פשוטה!<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בתא שלמעלה, הגדרנו את פעולת הקסם <code>__str__</code>.<br>\n הפעולה מקבלת כפרמטר את <var>self</var>, המופע שביקשנו להמיר למחרוזת,<br>\n ומחזירה לנו מחרוזת שאנחנו הגדרנו כמחרוזת שמתארת את המופע.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הגדרת פעולת הקסם <code>__str__</code> עבור מחלקה מסוימת מאפשרת לנו להמיר מופעים למחרוזות בצורה טבעית.\n</p>\n\n<h4 style=\"text-align: right; direction: rtl; float: right; clear: both;\">הפעולה <code>__init__</code></h4>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פעולת קסם חשובה אף יותר, ואולי המפורסמת ביותר, נקראת <code>__init__</code>.<br>\n היא מאפשרת לנו להגדיר מה יקרה ברגע שניצור מופע חדש:\n</p>",
"class User:\n def __init__(self):\n print(\"New user has been created!\")\n\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n def create_user(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n\n\nuser5 = User()\nuser5.create_user('Lorne', 'Malvo', 'Mick', 23)\nprint(user5)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בדוגמת הקוד שלמעלה הגדרנו את פעולת הקסם <code>__init__</code>, שתרוץ מייד כשנוצר מופע חדש.<br>\n החלטנו שברגע שייווצר מופע של משתמש, תודפס ההודעה <samp dir=\"ltr\">New user has been created!</samp>.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הכיף הגדול ב־<code>__init__</code> הוא היכולת שלה לקבל פרמטרים.<br>\n נוכל להעביר אליה את הארגומנטים בקריאה לשם המחלקה, בעת יצירת המופע 🤯\n</p>",
"class User:\n def __init__(self, message):\n self.creation_message = message\n print(self.creation_message)\n\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n def create_user(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n\n\nuser5 = User(\"New user has been created!\") # תראו איזה מגניב\nuser5.create_user('Lorne', 'Malvo', 'Mick', 58)\nprint(user5)\nprint(f\"We still have the message: {user5.creation_message}\")",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בתא שלמעלה הגדרנו שפעולת הקסם <code>__init__</code> תקבל כפרמטר הודעה להדפסה.<br>\n ההודעה תישמר בתכונה <var>creation_message</var> השייכת למופע, ותודפס מייד לאחר מכן.<br>\n את ההודעה העברנו כארגומנט בעת הקריאה לשם המחלקה, <var>User</var>, שיוצרת את המופע.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ואם כבר יש לנו משהו שרץ כשאנחנו יוצרים את המופע... והוא יודע לקבל פרמטרים...<br>\n אתם חושבים על מה שאני חושב?<br>\n בואו נשנה את השם של <var>create_user</var> ל־<code>__init__</code>!<br>\n בצורה הזו נוכל לצקת את התכונות למופע מייד עם יצירתו, ולוותר על קריאה נפרדת לפעולה שמטרתה למלא את הערכים:\n</p>",
"class User:\n def __init__(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n print(\"Yayy! We have just created a new instance! :D\")\n\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n\nuser5 = User('Lorne', 'Malvo', 'Mick', 58)\nprint(user5)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n איגדנו את יצירת תכונות המופע תחת פעולה אחת, שרצה כשהוא נוצר.<br>\n הרעיון הנפלא הזה נפוץ מאוד בשפות תכנות שתומכות במחלקות, ומוכרת בשם <dfn>פעולת אתחול</dfn> (<dfn>Initialization Method</dfn>).<br>\n זו גם הסיבה לשם הפעולה – המילה init נגזרת מהמילה initialization, אתחול. \n</p>\n\n<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שפצו את מחלקת הנקודה שיצרתם, כך שתכיל <code>__init__</code> ו־<code>__str__</code>.\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">ייצור מסחרי</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צ'יקצ'וק שמה את ידה על פרטי המשתמשים של הרשת החברתית המתחרה, סניילצ'אט.<br>\n רשימת המשתמשים נראית כך:\n</p>",
"snailchat_users = [\n ['Mike', 'Shugarberg', 'Marker', 36],\n ['Hammer', 'Doorsoy', 'Tzweetz', 43],\n ['Evan', 'Spygirl', 'Odd', 30],\n]",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נניח, לכאורה בלבד, שאנחנו רוצים להעתיק את אותה רשימת משתמשים ולצרף אותה לרשת החברתית שלנו.<br>\n קחו דקה וחשבו איך הייתם עושים את זה.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n זכרו שקריאה למחלקה <var>User</var> היא ככל קריאה לפונקציה אחרת,<br>\n ושהמופע שחוזר ממנה הוא ערך בדיוק כמו כל ערך אחר.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נוכל ליצור רשימת מופעים של משתמשים. לדוגמה:\n</p>",
"our_users = []\nfor user_details in snailchat_users:\n new_user = User(*user_details) # Unpacking – התא הראשון עובר לפרמטר התואם, וכך גם השני, השלישי והרביעי\n our_users.append(new_user)\n print(new_user)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n בקוד שלמעלה יצרנו רשימה ריקה, שאותה נמלא במשתמשים <strike>שנגנוב</strike> שנשאיל מסניילצ'אט.<br>\n נעביר את הפרטים של כל אחד מהמשתמשים המופיעים ב־<var>snailchat_users</var>, ל־<code>__init__</code> של <var>User</var>,<br>\n ונצרף את המופע החדש שנוצר לתוך הרשימה החדשה שיצרנו.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n עכשיו הרשימה <var>our_users</var> היא רשימה לכל דבר, שכוללת את כל המשתמשים החדשים שהצטרפו לרשת החברתית שלנו:\n</p>",
"print(our_users[0])\nprint(our_users[1])\nprint(our_users[2])",
"<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl; clear: both;\">\n <div style=\"display: flex; width: 10%; float: right; clear: both;\">\n <img src=\"images/exercise.svg\" style=\"height: 50px !important;\" alt=\"תרגול\"> \n </div>\n <div style=\"width: 70%\">\n <p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צרו את רשימת כל הנקודות שה־x וה־y שלהן הוא מספר שלם בין 0 ל־6.<br>\n לדוגמה, רשימת כל הנקודות שה־x וה־y שלהן הוא בין 0 ל־2 היא:<br>\n <samp dir=\"ltr\">[(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]</samp>\n </p>\n </div>\n <div style=\"display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;\">\n <p style=\"text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;\">\n <strong>חשוב!</strong><br>\n פתרו לפני שתמשיכו!\n </p>\n </div>\n</div>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">טעויות נפוצות</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">גבולות מרחב הערכים</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נסקור כמה דוגמאות כדי לוודא שבאמת הבנו כיצד מתנהגות מחלקות.<br>\n נגדיר את מחלקת <var>User</var> שאנחנו מכירים, ונצרף לה את הפעולה <var>celebrate_birthday</var>, שכזכור, מגדילה את גיל המשתמש ב־1:\n</p>",
"class User:\n def __init__(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n \n def celebrate_birthday(self):\n age = age + 1\n\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ניסיון ליצור מופע של משתמש ולחגוג לו יום הולדת יגרום לשגיאה.<br>\n תוכלו לנחש מה תהיה השגיאה עוד לפני שתריצו?\n</p>",
"user6 = User('Winston', 'Smith', 'Jeeves', 39)\nuser6.celebrate_birthday()",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n ניסינו לשנות את המשתנה <var>age</var> – אך הוא אינו מוגדר.<br>\n כדי לשנות את הגיל של המשתמש שיצרנו, נהיה חייבים להתייחס ל־<code>self.age</code>.<br>\n אם לא נציין במפורש שאנחנו רוצים לשנות את התכונה <var>age</var> ששייכת ל־<var>self</var>, פייתון לא תדע לאיזה מופע אנחנו מתכוונים.<br>\n נתקן:\n</p>",
"class User:\n def __init__(self, first_name, last_name, nickname, current_age):\n self.first_name = first_name\n self.last_name = last_name\n self.nickname = nickname\n self.age = current_age\n \n def celebrate_birthday(self):\n self.age = self.age + 1\n\n def __str__(self):\n full_name = f'{self.first_name} {self.last_name}'\n return f'{self.nickname} ({full_name}) is {self.age} years old.'\n\n\nuser6 = User('Winston', 'Smith', 'Jeeves', 39)\nprint(f\"User before birthday: {user6}\")\nuser6.celebrate_birthday()\nprint(f\"User after birthday: {user6}\")",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n באותה המידה, תכונות שהוגדרו כחלק ממופע לא מוגדרות מחוצה לו.<br>\n אפשר להשתמש, לדוגמה, בשם המשתנה <var>age</var> מבלי לחשוש לפגוע בתפקוד המחלקה או בתפקוד המופעים:\n</p>",
"user6 = User('Winston', 'Smith', 'Jeeves', 39)\nprint(user6)\nage = 10\nprint(user6)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כדי לשנות את גילו של המשתמש, נצטרך להתייחס אל התכונה שלו בצורת הכתיבה שלמדנו:\n</p>",
"user6.age = 10\nprint(user6)",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תכונה או פעולה שלא קיימות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n שגיאה שמתרחשת לא מעט היא פנייה לתכונה או לפעולה שלא קיימות עבור המופע.<br>\n לדוגמה:\n</p>",
"class Dice:\n def __init__(self, number):\n if 1 <= number <= 6:\n self.is_valid = True\n\n\ndice_bag = [Dice(roll_result) for roll_result in range(7)]",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n יצרנו רשימת קוביות וביצענו השמה כך ש־<var>dice_bag</var> תצביע עליה.<br>\n כעת נדפיס את התכונה <var>is_valid</var> של כל אחת מהקוביות:\n</p>",
"for dice in dice_bag:\n print(dice.is_valid)",
"<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n הבעיה היא שהקוביה הראשונה שיצרנו קיבלה את המספר 0.<br>\n במקרה כזה, התנאי בפעולת האתחול (<code>__init__</code>) לא יתקיים, והתכונה <var>is_valid</var> לא תוגדר.<br>\n כשהלולאה תגיע לקובייה 0 ותנסה לגשת לתכונה <var>is_valid</var>, נגלה שהיא לא קיימת עבור הקובייה 0, ונקבל <var>AttributeError</var>.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נתקן:\n</p>",
"class Dice:\n def __init__(self, number):\n self.is_valid = (1 <= number <= 6) # לא חייבים סוגריים\n\n\ndice_bag = [Dice(roll_result) for roll_result in range(7)]\nfor dice in dice_bag:\n print(dice.is_valid)",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">סיכום</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n במחברת זו רכשנו כלים לעבודה עם מחלקות ועצמים, ולייצוג אוספים של תכונות ופעולות.<br>\n כלים אלו יעזרו לנו לארגן טוב יותר את התוכנית שלנו ולייצג ישויות מהעולם האמיתי בצורה אינטואיטיבית יותר.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n נהוג לכנות את עולם המחלקות בשם \"<dfn>תכנות מונחה עצמים</dfn>\" (<dfn>Object Oriented Programming</dfn>, או <dfn>OOP</dfn>).<br>\n זו פרדיגמת תכנות הדוגלת ביצירת מחלקות לצורך חלוקת קוד טובה יותר,<br>\n ובתיאור עצמים מהעולם האמיתי בצורה טובה יותר, כאוספים של תכונות ופעולות.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n תכנות מונחה עצמים הוא פיתוח מאוחר יותר של פרדיגמת תכנות אחרת שאתם כבר מכירים, הנקראת \"<dfn>תכנות פרוצדורלי</dfn>\".<br>\n פרדיגמה זו דוגלת בחלוקת הקוד לתתי־תוכניות קטנות (מה שאתם מכירים כפונקציות), כדי ליצור קוד שמחולק טוב יותר וקל יותר לתחזוק.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n פייתון תומכת הן בתכנות פרוצדורלי והן בתכנות מונחה עצמים.\n</p>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">מונחים</span>\n<dl style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <dt>מחלקה (Class)</dt>\n <dd>\n תבנית, או שבלונה, שמתארת אוסף של תכונות ופעולות שיש ביניהן קשר.<br>\n המחלקה מגדירה מבנה שבעזרתו נוכל ליצור בקלות עצם מוגדר, שוב ושוב.<br>\n לדוגמה: מחלקה המתארת משתמש ברשת חברתית, מחלקה המתארת כלי רכב, מחלקה המתארת נקודה במישור.\n </dd>\n <dt>מופע (Instance)</dt>\n <dd>\n נקרא גם <dfn>עצם</dfn> (<dfn>Object</dfn>).<br>\n ערך שנוצר על ידי מחלקה כלשהי. סוג הערך ייקבע לפי המחלקה שיצרה אותו.<br>\n הערך נוצר לפי התבנית (\"השבלונה\") של המחלקה שממנה הוא נוצר, ומוצמדות לו הפעולות שהוגדרו במחלקה.<br>\n המופע הוא יחידה עצמאית שעומדת בפני עצמה. לרוב מחלקה תשמש אותנו ליצירת מופעים רבים.<br>\n לדוגמה: המופע \"נקודה שנמצאת ב־<span dir=\"ltr\">(5, 3)</span>\" יהיה מופע שנוצר מהמחלקה \"נקודה\". \n </dd>\n <dt>תכונה (Property, Member)</dt>\n <dd>\n ערך אופייני למופע שנוצר מהמחלקה.<br>\n משתנים השייכים למופע שנוצר מהמחלקה, ומכילים ערכים שמתארים אותו.<br>\n לדוגמה: לנקודה במישור יש ערך x וערך y. 
אלו 2 תכונות של הנקודה.<br>\n נוכל להחליט שתכונותיה של מחלקת מכונית יהיו צבע, דגם ויצרן.\n </dd>\n <dt>פעולה (Method)</dt>\n <dd>\n פונקציה שמוגדרת בגוף המחלקה.<br>\n מתארת התנהגויות אפשריות של המופע שייווצר מהמחלקה.<br>\n לדוגמה: פעולה על נקודה במישור יכולה להיות מציאת מרחקה מראשית הצירים.<br>\n פעולה על שולחן יכולה להיות \"קצץ 5 סנטימטר מגובהו\".\n </dd>\n <dt>שדה (Field, Attribute)</dt>\n <dd>\n שם כללי הנועד לתאר תכונה או פעולה.<br>\n שדות של מופע מסוים יהיו כלל התכונות והפעולות שאפשר לגשת אליהן מאותו מופע.<br>\n לדוגמה: השדות של נקודה יהיו התכונות x ו־y, והפעולה שבודקת את מרחקה מראשית הצירים.\n </dd>\n <dt>פעולה מיוחדת (Special Method)</dt>\n <dd>\n ידועה גם כ־<dfn>dunder method</dfn> (double under, קו תחתון כפול) או כ־<dfn>magic method</dfn> (פעולת קסם).<br>\n פעולה שהגדרתה במחלקה גורמת למחלקה או למופעים הנוצרים ממנה להתנהגות מיוחדת.<br>\n דוגמאות לפעולות שכאלו הן <code>__init__</code> ו־<code>__str__</code>.\n </dd>\n <dt>פעולת אתחול (Initialization Method)</dt>\n <dd>\n פעולה שרצה עם יצירת מופע חדש מתוך מחלקה.<br>\n לרוב משתמשים בפעולה זו כדי להזין במופע ערכים התחלתיים.\n </dd>\n <dt>תכנות מונחה עצמים (Object Oriented Programming)</dt>\n <dd>\n פרדיגמת תכנות שמשתמשת במחלקות בקוד ככלי העיקרי להפשטה של העולם האמיתי.<br>\n בפרדיגמה זו נהוג ליצור מחלקות המייצגות תבניות של עצמים, ולאפיין את העצמים באמצעות תכונות ופעולות.<br>\n בעזרת המחלקות אפשר ליצור מופעים, שהם ייצוג של פריט בודד (עצם, אובייקט) שנוצר לפי תבנית המחלקה.\n </dd>\n</dl>\n\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגיל לדוגמה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו מחלקה המייצגת נתיב תקין במערכת ההפעלה חלונות.<br>\n הנתיב מחולק לחלקים באמצעות התו / או התו \\.<br>\n החלק הראשון בנתיב הוא תמיד אות הכונן ואחריה נקודתיים.<br>\n החלקים שנמצאים אחרי החלק הראשון, ככל שיש כאלו, הם תיקיות וקבצים.<br>\n דוגמאות לנתיבים תקינים:\n</p>\n\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li><span dir=\"ltr\">C:\\Users\\Yam\\python.jpg</span></li>\n <li><span dir=\"ltr\">C:/Users/Yam/python.jpg</span></li>\n <li><span dir=\"ltr\">C:</span></li>\n <li><span dir=\"ltr\">C:\\</span></li>\n <li><span dir=\"ltr\">C:/</span></li>\n <li><span dir=\"ltr\">C:\\User/</span></li>\n <li><span dir=\"ltr\">D:/User/</span></li>\n <li><span dir=\"ltr\">C:/User</span></li>\n </ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n המחלקה תכלול את הפעולות הבאות:\n</p>\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>אחזר את אות הכונן בעזרת הפעולה <var>get_drive_letter</var>.</li>\n <li>אחזר את הנתיב ללא חלקו האחרון בעזרת הפעולה <var>get_dirname</var>.</li>\n <li>אחזר את שם החלק האחרון בנתיב, בעזרת הפעולה <var>get_basename</var>.</li>\n <li>אחזר את סיומת הקובץ בעזרת הפעולה <var>get_extension</var>.</li>\n <li>אחזר אם הנתיב קיים במחשב בעזרת הפעולה <var>is_exists</var>.</li>\n <li>אחזר את הנתיב כולו כמחרוזת, כשהתו המפריד הוא <samp>/</samp>, וללא <samp>/</samp> בסוף הנתיב.</li>\n</ul>",
"import os\n\n\nclass Path:\n def __init__(self, path):\n self.fullpath = path\n self.parts = list(self.get_parts())\n\n def get_parts(self):\n current_part = \"\"\n for char in self.fullpath:\n if char in r\"\\/\":\n yield current_part\n current_part = \"\"\n else:\n current_part = current_part + char\n if current_part != \"\":\n yield current_part\n\n def get_drive_letter(self):\n return self.parts[0].rstrip(\":\")\n\n def get_dirname(self):\n path = \"/\".join(self.parts[:-1])\n return Path(path)\n\n def get_basename(self):\n return self.parts[-1]\n\n def get_extension(self):\n name = self.get_basename()\n i = name.rfind('.')\n if 0 < i < len(name) - 1:\n return name[i + 1:]\n return ''\n\n def is_exists(self):\n return os.path.exists(str(self))\n\n def normalize_path(self):\n normalized = \"\\\\\".join(self.parts)\n return normalized.rstrip(\"\\\\\")\n\n def info_message(self):\n return f\"\"\"\n Some info about \"{self}\":\n Drive letter: {self.get_drive_letter()}\n Dirname: {self.get_dirname()}\n Last part of path: {self.get_basename()}\n File extension: {self.get_extension()}\n Is exists?: {self.is_exists()}\n \"\"\".strip()\n\n def __str__(self):\n return self.normalize_path()\n\n\nEXAMPLES = (\n r\"C:\\Users\\Yam\\python.jpg\",\n r\"C:/Users/Yam/python.jpg\",\n r\"C:\",\n r\"C:\\\\\",\n r\"C:/\",\n r\"C:\\Users/\",\n r\"D:/Users/\",\n r\"C:/Users\",\n)\nfor example in EXAMPLES:\n path = Path(example)\n print(path.info_message())\n print()",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגילים</span>\n<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">סקרנות</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n מחלקת המוצר בצ'יקצ'וק החליטה להוסיף פיצ'ר שמאפשר למשתמשים ליצור סקרים, וכרגיל כל העבודה נופלת עליכם.<br>\n כתבו מחלקה בשם <var>Poll</var> שמייצגת סקר.<br>\n פעולת האתחול של המחלקה תקבל כפרמטר את שאלת הסקר, וכפרמטר נוסף iterable עם כל אפשרויות ההצבעה לסקר.<br>\n כל אפשרות הצבעה בסקר מיוצגת על ידי מחרוזת.<br>\n המחלקה תכיל את הפעולות הבאות: \n</p>\n\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li><var>vote</var> שמקבלת כפרמטר אפשרות הצבעה לסקר ומגדילה את מספר ההצבעות בו ב־1.</li>\n <li><var>add_option</var>, שמקבלת כפרמטר אפשרות הצבעה לסקר ומוסיפה אותה.</li>\n <li><var>remove_option</var> שמקבלת כפרמטר אפשרות הצבעה לסקר ומוחקת אותה.</li>\n <li><var>get_votes</var> המחזירה את כל האפשרויות כרשימה של tuple, המסודרים לפי כמות ההצבעות.<br>\n בכל tuple התא הראשון יהיה שם האפשרות בסקר, והתא השני יהיה מספר ההצבעות.</li>\n <li><var>get_winner</var> המחזירה את שם האפשרות שקיבלה את מרב ההצבעות.</li>\n</ol>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n במקרה של תיקו, החזירו מ־<var>get_winner</var> את אחת האפשרויות המובילות.<br>\n החזירו מהפעולות <var>vote</var>, <var>add_option</var> ו־<var>remove_option</var> את הערך <samp>True</samp> אם הפעולה עבדה כמצופה.<br> \n במקרה של הצבעה לאפשרות שאינה קיימת, מחיקת אפשרות שאינה קיימת או הוספת אפשרות שכבר קיימת, החזירו <samp>False</samp>.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nודאו שהקוד הבא מדפיס רק <samp>True</samp> עבור התוכנית שכתבתם:\n</p>",
"def cast_multiple_votes(poll, votes):\n for vote in votes:\n poll.vote(vote)\n\n\nbridge_question = Poll('What is your favourite colour?', ['Blue', 'Yellow'])\ncast_multiple_votes(bridge_question, ['Blue', 'Blue', 'Yellow'])\nprint(bridge_question.get_winner() == 'Blue')\ncast_multiple_votes(bridge_question, ['Yellow', 'Yellow'])\nprint(bridge_question.get_winner() == 'Yellow')\nprint(bridge_question.get_votes() == [('Yellow', 3), ('Blue', 2)])\nbridge_question.remove_option('Yellow')\nprint(bridge_question.get_winner() == 'Blue')\nprint(bridge_question.get_votes() == [('Blue', 2)])\nbridge_question.add_option('Yellow')\nprint(bridge_question.get_votes() == [('Blue', 2), ('Yellow', 0)])\nprint(not bridge_question.add_option('Blue'))\nprint(bridge_question.get_votes() == [('Blue', 2), ('Yellow', 0)])",
"<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">משחקי הרעב</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n קטניס אוורדין הלכה לאיבוד באיזו זירה מעצבנת, ועכשיו היא מחפשת את הסניף הקרוב של אבו־חסן למנה משולשת ראויה.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n צורת הזירה היא משולש שקודקודיו <span dir=\"ltr\">(0, 0)</span>, <span dir=\"ltr\">(2, 2)</span> ו־<span dir=\"ltr\">(4, 0)</span>.<br>\n קטניס מתחילה מאחד הקודקודים ומחליטה על הצעד הבא שלה כך:<br>\n\n</p>\n\n<ol style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>היא בוחרת אקראית באחד מקודקודי הזירה.</li>\n <li>היא הולכת מהמקום שבו היא נמצאת את מחצית הדרך עד לקודקוד שבחרה.</li>\n <li>היא מסמנת על המפה את הנקודה שהגיעה אליה.</li>\n</ol>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n כתבו פעולה בשם <var>plot_walks</var>, שמקבלת כפרמטר את מספר הצעדים של קטניס.<br>\n הפעולה תצייר מפת נקודות בגודל 4 על 4, שכל נקודה בה מציינת מקום שקטניס סימנה במפה שלה.<br>\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n השתמשו במנועי חיפוש כדי לקרוא על פעולות קסם שיכולות לעזור לכם, ועל מודולים לשרטוט גרפים.<br>\n שימו לב שנקודות יכולות להיות ממוקמות על x ו־y עשרוניים.\n</p>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jamesmarva/maths-with-python | 04-basic-plotting.ipynb | mit | [
"Plotting\nThere are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.",
"from matplotlib import pyplot\n%matplotlib inline",
"The command %matplotlib inline is not a Python command, but an IPython command. When using the console, or the notebook, it makes the plots appear inline. You do not want to use this in a plain Python code.",
"from math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x.append(0.01*i)\n y.append(sin(pi*x[-1])**2)\n\npyplot.plot(x, y)\npyplot.show()",
"We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.\nThis plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:",
"from math import sin, pi\n\nx = []\ny = []\nfor i in range(201):\n x.append(0.01*i)\n y.append(sin(pi*x[-1])**2)\n\npyplot.plot(x, y, marker='+', markersize=8, linestyle=':', \n linewidth=3, color='b', label=r'$\\sin^2(\\pi x)$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic plot')\npyplot.show()",
"Whilst most of the commands are self-explanatory, a note should be made of the strings line r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be \"raw\": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \\pi and \\sin. Most basic symbols can be easily guess (eg \\theta or \\int), but there are useful lists of symbols, and a reverse search site available. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms.\nBy combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly.\nHere are some more examples:",
"from math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x.append(1.0+0.01*i)\n y1.append(exp(sin(pi*x[-1])))\n y2.append(log(pi+x[-1]*sin(x[-1])))\n\npyplot.loglog(x, y1, linestyle='--', linewidth=4, \n color='k', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.loglog(x, y2, linestyle='-.', linewidth=4, \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A basic logarithmic plot')\npyplot.show()\n\nfrom math import sin, pi, exp, log\n\nx = []\ny1 = []\ny2 = []\nfor i in range(201):\n x.append(1.0+0.01*i)\n y1.append(exp(sin(pi*x[-1])))\n y2.append(log(pi+x[-1]*sin(x[-1])))\n\npyplot.semilogy(x, y1, linestyle='None', marker='o', \n color='g', label=r'$y_1=e^{\\sin(\\pi x)}$')\npyplot.semilogy(x, y2, linestyle='None', marker='^', \n color='r', label=r'$y_2=\\log(\\pi+x\\sin(x))$')\npyplot.legend(loc='lower right')\npyplot.xlabel(r'$x$')\npyplot.ylabel(r'$y$')\npyplot.title('A different logarithmic plot')\npyplot.show()",
"We will look at more complex plots later, but the matplotlib documentation contains a lot of details, and the gallery contains a lot of examples that can be adapted to fit. There is also an extremely useful document as part of Johansson's lectures on scientific Python.\nExercise: Logistic map\nThe logistic map builds a sequence of numbers ${ x_n }$ using the relation\n\\begin{equation}\n x_{n+1} = r x_n \\left( 1 - x_n \\right),\n\\end{equation}\nwhere $0 \\le x_0 \\le 1$.\nExercise 1\nWrite a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).\nExercise 2\nFix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$. Plot the last 100 members of the sequence in both cases.\nWhat does this suggest about the long-term behaviour of the sequence?\nExercise 3\nFix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).\nExercise 4\nFor iterative maps such as the logistic map, one of three things can occur:\n\nThe sequence settles down to a fixed point.\nThe sequence rotates through a finite number of values. This is called a limit cycle.\nThe sequence generates an infinite number of values. This is called deterministic chaos.\n\nUsing just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.\nExercise: Mandelbrot\nThe Mandelbrot set is also generated from a sequence, ${ z_n }$, using the relation\n\\begin{equation}\n z_{n+1} = z_n^2 + c, \\qquad z_0 = 0.\n\\end{equation}\nThe members of the sequence, and the constant $c$, are all complex. The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. In reality, checking the first 100 iterations is sufficient.\nNote: the Python notation for a complex number $x + \\text{i} y$ is x + yj: that is, j is used to indicate $\\sqrt{-1}$. If you know the values of x and y then x + yj constructs a complex number; if they are stored in variables you can use complex(x, y).\nExercise 1\nWrite a function that checks if the point $c$ is in the Mandelbrot set.\nExercise 2\nCheck the points $c=0$ and $c=\\pm 2 \\pm 2 \\text{i}$ and ensure they do what you expect. (What should you expect?)\nExercise 3\nWrite a function that, given $N$\n\ngenerates an $N \\times N$ grid spanning $c = x + \\text{i} y$, for $-2 \\le x \\le 2$ and $-2 \\le y \\le 2$;\nreturns an $N\\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.\n\nExercise 4\nUsing the function imshow from matplotlib, plot the resulting array for a $100 \\times 100$ array to make sure you see the expected shape.\nExercise 5\nModify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.\nExercise 6\nTry some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ellisonbg/talk-2015 | 12-JupyterLab.ipynb | mit | [
"Building Blocks for Interactive Computing\nWhat are the building blocks for interactive computing?",
"%load_ext load_style\n%load_style images.css\nfrom IPython.display import display, Image",
"File browser",
"Image('images/lego-filebrowser.png', width='80%')",
"Terminal",
"Image('images/lego-terminal.png', width='80%')",
"Text editor (a place to type code)",
"Image('images/lego-texteditor.png', width='80%')",
"Output",
"Image('images/lego-output.png', width='80%')",
"Other building blocks\n\nKernels (processe for running code)\nPython\nR\nJulia\nScala\nDocument formats for storing code and results\nNotebook document format\nText files\nNarrative text\nDebugger\nProfiler\nVariable inspector\n\nNotebooks\nThe Jupyter Notebook is one way of assembling these building blocks as a linear sequence of input and output. There are other ways of assembling these building blocks:\n\n\nText editor hooked up to a kernel and output area\nMore traditional REPL\nDashboard with only output\n\n\nRecently, we worked with collaborators from IBM to perform a UX survey of Jupyter users. The executive summary can be read here.\nThis survey, along with many years of talking to users has lead us to the following vision:\n<div class=\"alert bg-primary\"> Jupyter needs to provide flexible building blocks for interactive computing that can be assembled and applied to different workflows </div>\n\nJupyterLab\nJupyterLab in the next generation user interface for project Jupyter that will provide this frame work for assembling these building blocks in different ways. It will ship alongside the existing notebook in version 5.0.\nJupyterLab is an IDE = Interactive Development Environment"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n | site/ko/tutorials/estimator/premade.ipynb | apache-2.0 | [
"Copyright 2019 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"사전 제작 Estimator\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/estimator/premade\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org에서 보기</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimator/premade.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab에서 실행하기</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimator/premade.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub에서소스 보기</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/premade.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">노트북 다운로드하기</a></td>\n</table>\n\n이 튜토리얼에서는 Estimator를 사용하여 TensorFlow에서 Iris 분류 문제를 해결하는 방법을 보여줍니다. Estimator는 완전한 모델을 TensorFlow에서 높은 수준으로 표현한 것이며, 간편한 크기 조정과 비동기식 훈련에 목적을 두고 설계되었습니다. 자세한 내용은 Estimator를 참조하세요.\nTensorFlow 2.0에서 Keras API는 이러한 작업을 상당 부분 동일하게 수행할 수 있으며 배우기 쉬운 API로 여겨집니다. 새로 시작하는 경우 Keras로 시작하는 것이 좋습니다. TensorFlow 2.0에서 사용 가능한 고급 API에 대한 자세한 정보는 Keras에 표준화를 참조하세요.\n시작을 위한 준비\n시작하려면 먼저 TensorFlow와 필요한 여러 라이브러리를 가져옵니다.",
"import tensorflow as tf\n\nimport pandas as pd",
"데이터세트\n이 문서의 샘플 프로그램은 아이리스 꽃을 꽃받침잎과 꽃잎의 크기에 따라 세 가지 종으로 분류하는 모델을 빌드하고 테스트합니다.\nIris 데이터세트를 사용하여 모델을 훈련합니다. Iris 데이터세트에는 네 가지 특성과 하나의 레이블이 있습니다. 이 네 가지 특성은 개별 아이리스 꽃의 다음과 같은 식물 특성을 식별합니다.\n\n꽃받침잎 길이\n꽃받침잎 너비\n꽃잎 길이\n꽃잎 너비\n\n이 정보를 바탕으로 데이터를 구문 분석하는 데 도움이 되는 몇 가지 상수를 정의할 수 있습니다.",
"CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']\nSPECIES = ['Setosa', 'Versicolor', 'Virginica']",
"그 다음, Keras 및 Pandas를 사용하여 Iris 데이터세트를 다운로드하고 구문 분석합니다. 훈련 및 테스트를 위해 별도의 데이터세트를 유지합니다.",
"train_path = tf.keras.utils.get_file(\n \"iris_training.csv\", \"https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv\")\ntest_path = tf.keras.utils.get_file(\n \"iris_test.csv\", \"https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv\")\n\ntrain = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)\ntest = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)",
"데이터를 검사하여 네 개의 float 특성 열과 하나의 int32 레이블이 있는지 확인할 수 있습니다.",
"train.head()",
"각 데이터세트에 대해 예측하도록 모델을 훈련할 레이블을 분할합니다.",
"train_y = train.pop('Species')\ntest_y = test.pop('Species')\n\n# The label column has now been removed from the features.\ntrain.head()",
"Estimator를 사용한 프로그래밍 개요\n이제 데이터가 설정되었으므로 TensorFlow Estimator를 사용하여 모델을 정의할 수 있습니다. Estimator는 tf.estimator.Estimator에서 파생된 임의의 클래스입니다. TensorFlow는 일반적인 ML 알고리즘을 구현하기 위해 tf.estimator(예: LinearRegressor) 모음을 제공합니다. 그 외에도 고유한 사용자 정의 Estimator를 작성할 수 있습니다. 처음 시작할 때는 사전 제작된 Estimator를 사용하는 것이 좋습니다.\n사전 제작된 Estimator를 기초로 TensorFlow 프로그램을 작성하려면 다음 작업을 수행해야 합니다.\n\n하나 이상의 입력 함수를 작성합니다.\n모델의 특성 열을 정의합니다.\n특성 열과 다양한 하이퍼 매개변수를 지정하여 Estimator를 인스턴스화합니다.\nEstimator 객체에서 하나 이상의 메서드를 호출하여 적합한 입력 함수를 데이터 소스로 전달합니다.\n\n이러한 작업이 Iris 분류를 위해 어떻게 구현되는지 알아보겠습니다.\n입력 함수 작성하기\n훈련, 평가 및 예측을 위한 데이터를 제공하려면 입력 함수를 작성해야 합니다.\n입력 함수는 다음 두 요소 튜플을 출력하는 tf.data.Dataset 객체를 반환하는 함수입니다.\n\nfeatures -다음과 같은 Python 사전:\n각 키가 특성의 이름입니다.\n각 값은 해당 특성 값을 모두 포함하는 배열입니다.\n\n\nlabel - 모든 예제의 레이블 값을 포함하는 배열입니다.\n\n입력 함수의 형식을 보여주기 위해 여기에 간단한 구현을 나타냈습니다.",
"def input_evaluation_set():\n features = {'SepalLength': np.array([6.4, 5.0]),\n 'SepalWidth': np.array([2.8, 2.3]),\n 'PetalLength': np.array([5.6, 3.3]),\n 'PetalWidth': np.array([2.2, 1.0])}\n labels = np.array([2, 1])\n return features, labels",
"입력 함수에서 원하는 대로 features 사전 및 label 목록이 생성되도록 할 수 있습니다. 그러나 모든 종류의 데이터를 구문 분석할 수 있는 TensorFlow의 Dataset API를 사용하는 것이 좋습니다.\nDataset API는 많은 일반적인 경우를 자동으로 처리할 수 있습니다. 예를 들어, Dataset API를 사용하면 대규모 파일 모음에서 레코드를 병렬로 쉽게 읽고 이를 단일 스트림으로 결합할 수 있습니다.\n이 예제에서는 작업을 단순화하기 위해 pandas 데이터를 로드하고 이 인메모리 데이터에서 입력 파이프라인을 빌드합니다.",
"def input_fn(features, labels, training=True, batch_size=256):\n \"\"\"An input function for training or evaluating\"\"\"\n # Convert the inputs to a Dataset.\n dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))\n\n # Shuffle and repeat if you are in training mode.\n if training:\n dataset = dataset.shuffle(1000).repeat()\n \n return dataset.batch(batch_size)\n",
"특성 열 정의하기\n특성 열은 모델이 특성 사전의 원시 입력 데이터를 사용하는 방식을 설명하는 객체입니다. Estimator 모델을 빌드할 때는 모델에서 사용할 각 특성을 설명하는 특성 열 목록을 전달합니다. tf.feature_column 모듈은 모델에 데이터를 나타내기 위한 많은 옵션을 제공합니다.\nIris의 경우 4개의 원시 특성은 숫자 값이므로, 네 개의 특성 각각을 32-bit 부동 소수점 값으로 나타내도록 Estimator 모델에 알려주는 특성 열 목록을 빌드합니다. 따라서 특성 열을 작성하는 코드는 다음과 같습니다.",
"# Feature columns describe how to use the input.\nmy_feature_columns = []\nfor key in train.keys():\n my_feature_columns.append(tf.feature_column.numeric_column(key=key))",
"특성 열은 여기에 표시된 것보다 훨씬 정교할 수 있습니다. 이 가이드에서 특성 열에 대한 자세한 내용을 읽을 수 있습니다.\n모델이 원시 특성을 나타내도록 할 방식에 대한 설명이 준비되었으므로 Estimator를 빌드할 수 있습니다.\nEstimator 인스턴스화하기\nIris 문제는 고전적인 분류 문제입니다. 다행히도 TensorFlow는 다음을 포함하여 여러 가지 사전 제작된 분류자 Estimator를 제공합니다.\n\n다중 클래스 분류를 수행하는 심층 모델을 위한 tf.estimator.DNNClassifier\n넓고 깊은 모델을 위한 tf.estimator.DNNLinearCombinedClassifier\n선형 모델에 기초한 분류자를 위한 tf.estimator.LinearClassifier\n\nIris 문제의 경우 tf.estimator.DNNClassifier가 최선의 선택인 것으로 여겨집니다. 이 Estimator를 인스턴스화하는 방법은 다음과 같습니다.",
"# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.\nclassifier = tf.estimator.DNNClassifier(\n feature_columns=my_feature_columns,\n # Two hidden layers of 30 and 10 nodes respectively.\n hidden_units=[30, 10],\n # The model must choose between 3 classes.\n n_classes=3)",
"훈련, 평가 및 예측하기\n이제 Estimator 객체가 준비되었으므로 메서드를 호출하여 다음을 수행할 수 있습니다.\n\n모델을 훈련합니다.\n훈련한 모델을 평가합니다.\n훈련한 모델을 사용하여 예측을 수행합니다.\n\n모델 훈련하기\n다음과 같이 Estimator의 train 메서드를 호출하여 모델을 훈련합니다.",
"# Train the Model.\nclassifier.train(\n input_fn=lambda: input_fn(train, train_y, training=True),\n steps=5000)",
"Estimator가 예상한 대로 인수를 사용하지 않는 입력 함수를 제공하면서 인수를 포착하기 위해 lambda에서 input_fn 호출을 래핑합니다. steps 인수는 여러 훈련 단계를 거친 후에 훈련을 중지하도록 메서드에 지시합니다.\n훈련한 모델 평가하기\n모델을 훈련했으므로 성능에 대한 통계를 얻을 수 있습니다. 다음 코드 블록은 테스트 데이터에서 훈련한 모델의 정확도를 평가합니다.",
"eval_result = classifier.evaluate(\n input_fn=lambda: input_fn(test, test_y, training=False))\n\nprint('\\nTest set accuracy: {accuracy:0.3f}\\n'.format(**eval_result))",
"train 메서드에 대한 호출과 달리 평가할 steps 인수를 전달하지 않았습니다. eval에 대한 input_fn은 단 하나의 데이터 epoch만 생성합니다.\neval_result 사전에는 average_loss(샘플당 평균 손실), loss(미니 배치당 평균 손실) 및 Estimator의 global_step 값(받은 훈련 반복 횟수)도 포함됩니다.\n훈련한 모델에서 예측(추론)하기\n우수한 평가 결과를 생성하는 훈련한 모델을 만들었습니다. 이제 훈련한 모델을 사용하여 레이블이 지정되지 않은 일부 측정을 바탕으로 아이리스 꽃의 종을 예측할 수 있습니다. 훈련 및 평가와 마찬가지로 단일 함수 호출을 사용하여 예측합니다.",
"# Generate predictions from the model\nexpected = ['Setosa', 'Versicolor', 'Virginica']\npredict_x = {\n 'SepalLength': [5.1, 5.9, 6.9],\n 'SepalWidth': [3.3, 3.0, 3.1],\n 'PetalLength': [1.7, 4.2, 5.4],\n 'PetalWidth': [0.5, 1.5, 2.1],\n}\n\ndef input_fn(features, batch_size=256):\n \"\"\"An input function for prediction.\"\"\"\n # Convert the inputs to a Dataset without labels.\n return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)\n\npredictions = classifier.predict(\n input_fn=lambda: input_fn(predict_x))",
"predict 메서드는 Python iterable을 반환하여 각 예제에 대한 예측 결과 사전을 생성합니다. 다음 코드는 몇 가지 예측과 해당 확률을 출력합니다.",
"for pred_dict, expec in zip(predictions, expected):\n class_id = pred_dict['class_ids'][0]\n probability = pred_dict['probabilities'][class_id]\n\n print('Prediction is \"{}\" ({:.1f}%), expected \"{}\"'.format(\n SPECIES[class_id], 100 * probability, expec))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
carthach/essentia | src/examples/tutorial/example_clickdetector.ipynb | agpl-3.0 | [
"ClickDetector use example\nThis algorithm detects the locations of impulsive noises (clicks and pops) on\nthe input audio frame. It relies on LPC coefficients to inverse-filter the\naudio in order to attenuate the stationary part and enhance the prediction\nerror (or excitation noise)[1]. After this, a matched filter is used to\nfurther enhance the impulsive peaks. The detection threshold is obtained from\na robust estimate of the excitation noise power [2] plus a parametric gain\nvalue.\nReferences:\n [1] Vaseghi, S. V., & Rayner, P. J. W. (1990). Detection and suppression of\n impulsive noise in speech communication systems. IEE Proceedings I\n (Communications, Speech and Vision), 137(1), 38-46.\n [2] Vaseghi, S. V. (2008). Advanced digital signal processing and noise\n reduction. John Wiley & Sons. Page 355",
"import essentia.standard as es\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import Audio \nfrom essentia import array as esarr\nplt.rcParams[\"figure.figsize\"] =(12,9)\n\ndef compute(x, frame_size=1024, hop_size=512, **kwargs):\n clickDetector = es.ClickDetector(frameSize=frame_size,\n hopSize=hop_size, \n **kwargs)\n ends = []\n starts = []\n for frame in es.FrameGenerator(x, frameSize=frame_size,\n hopSize=hop_size, startFromZero=True):\n frame_starts, frame_ends = clickDetector(frame)\n\n for s in frame_starts:\n starts.append(s)\n for e in frame_ends:\n ends.append(e)\n\n return starts, ends",
"Generating a click example\nLets start by degradating some audio files with some clicks of different amplitudes",
"fs = 44100.\n\naudio_dir = '../../audio/'\naudio = es.MonoLoader(filename='{}/{}'.format(audio_dir,\n 'recorded/vignesh.wav'),\n sampleRate=fs)()\n\noriginalLen = len(audio)\njumpLocation1 = int(originalLen / 4.)\njumpLocation2 = int(originalLen / 2.)\njumpLocation3 = int(originalLen * 3 / 4.)\n\naudio[jumpLocation1] += .5\naudio[jumpLocation2] += .15\naudio[jumpLocation3] += .05\n\ngroundTruth = esarr([jumpLocation1, jumpLocation2, jumpLocation3]) / fs\n\nfor point in groundTruth:\n l1 = plt.axvline(point, color='g', alpha=.5)\n\ntimes = np.linspace(0, len(audio) / fs, len(audio))\nplt.plot(times, audio)\n\nl1.set_label('Click locations')\nplt.legend()\nplt.title('Signal with artificial clicks of different amplitudes')",
"Lets listen to the clip to have an idea on how audible the clips are",
"Audio(audio, rate=fs)",
"The algorithm\nThis algorithm outputs the starts and ends timestapms of the clicks. The following plots show how the algorithm performs in the previous examples",
"starts, ends = compute(audio)\n\nfig, ax = plt.subplots(len(groundTruth))\nplt.subplots_adjust(hspace=.4)\nfor idx, point in enumerate(groundTruth):\n l1 = ax[idx].axvline(starts[idx], color='r', alpha=.5)\n ax[idx].axvline(ends[idx], color='r', alpha=.5)\n l2 = ax[idx].axvline(point, color='g', alpha=.5)\n ax[idx].plot(times, audio)\n ax[idx].set_xlim([point-.001, point+.001])\n ax[idx].set_title('Click located at {:.2f}s'.format(point))\n \n \n fig.legend((l1, l2), ('Detected click', 'Ground truth'), 'upper right')",
"The parameters\nthis is an explanation of the most relevant parameters of the algorithm\n\n\ndetectionThreshold. This algorithm features an adaptative threshold obtained from the instant power of each frame. This parameter is a gain factor to adjust the algorithm to different kinds of signals. Typically it should be increased for very \"noisy\" music as hard rock or electric music. The default value was empirically found to perform well in most of the cases. \n\n\npowerEstimationThreshold. After removing the auto-regressive part of the input frames through the LPC filter, the residual is used to compute the detection threshold. This signal is clipped to 'powerEstimationThreshold' times its\n median as a way to prevent the clicks to have a huge impact in the estimated threshold. This parameter controls how much the residual is clipped. \n\n\norder. The order for the LPC. As a rule of thumb, use 2 coefficients for each format on the input signal. However, it was empirically found that modelling more than 5 formats did not improve the clip detection on music.\n\n\nsilenceThreshold. Very low energy frames can have an unexpected shape. This frame can contain very small clicks that are detected by the algorithm but are impossible to hear. Thus, it is better to skip them with a silence threshold."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/asl-ml-immersion | notebooks/kubeflow_pipelines/pipelines/solutions/kfp_pipeline_vertex_automl_batch_predictions.ipynb | apache-2.0 | [
"Continuous Training with AutoML Vertex Pipelines with Batch Predictions\nLearning Objectives:\n1. Learn how to use Vertex AutoML pre-built components\n1. Learn how to build a Vertex AutoML pipeline with these components using BigQuery as a data source\n1. Learn how to compile, upload, and run the Vertex AutoML pipeline\n1. Serve batch predictions with BigQuery source from the AutoML pipeline\nIn this lab, you will build, deploy, and run a Vertex AutoML pipeline that orchestrates the Vertex AutoML AI services to train, tune, and serve batch predictions to BigQuery with a model. \nSetup",
"import os\n\nfrom google.cloud import aiplatform\n\nREGION = \"us-central1\"\nPROJECT = !(gcloud config get-value project)\nPROJECT = PROJECT[0]\n\nos.environ[\"PROJECT\"] = PROJECT\n\n# Set `PATH` to include the directory containing KFP CLI\nPATH = %env PATH\n%env PATH=/home/jupyter/.local/bin:{PATH}",
"BigQuery Data\nIf you have not gone through the KFP Walkthrough lab, you will need to run the following cell to create a BigQuery dataset and table containing the data required for this lab.\nNOTE If you already have the covertype data in a bigquery table at <PROJECT_ID>.covertype_dataset.covertype you may skip to Understanding the pipeline design.",
"%%bash\n\nDATASET_LOCATION=US\nDATASET_ID=covertype_dataset\nTABLE_ID=covertype\nDATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv\nSCHEMA=Elevation:INTEGER,\\\nAspect:INTEGER,\\\nSlope:INTEGER,\\\nHorizontal_Distance_To_Hydrology:INTEGER,\\\nVertical_Distance_To_Hydrology:INTEGER,\\\nHorizontal_Distance_To_Roadways:INTEGER,\\\nHillshade_9am:INTEGER,\\\nHillshade_Noon:INTEGER,\\\nHillshade_3pm:INTEGER,\\\nHorizontal_Distance_To_Fire_Points:INTEGER,\\\nWilderness_Area:STRING,\\\nSoil_Type:STRING,\\\nCover_Type:INTEGER\n\nbq --location=$DATASET_LOCATION --project_id=$PROJECT mk --dataset $DATASET_ID\n\nbq --project_id=$PROJECT --dataset_id=$DATASET_ID load \\\n--source_format=CSV \\\n--skip_leading_rows=1 \\\n--replace \\\n$TABLE_ID \\\n$DATA_SOURCE \\\n$SCHEMA",
"Understanding the pipeline design\nThe workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the pipeline_vertex/pipeline_vertex_automl_batch_preds.py file that we will generate below.\nThe pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.\nBuilding and deploying the pipeline\nLet us write the pipeline to disk:",
"%%writefile ./pipeline_vertex/pipeline_vertex_automl_batch_preds.py\n\"\"\"Kubeflow Covertype Pipeline.\"\"\"\n\nimport os\n\nfrom google_cloud_pipeline_components.aiplatform import (\n AutoMLTabularTrainingJobRunOp,\n TabularDatasetCreateOp,\n ModelBatchPredictOp\n)\nfrom kfp.v2 import dsl\n\nPIPELINE_ROOT = os.getenv(\"PIPELINE_ROOT\")\nPROJECT = os.getenv(\"PROJECT\")\nDATASET_SOURCE = os.getenv(\"DATASET_SOURCE\")\nPIPELINE_NAME = os.getenv(\"PIPELINE_NAME\", \"covertype\")\nDISPLAY_NAME = os.getenv(\"MODEL_DISPLAY_NAME\", PIPELINE_NAME)\nTARGET_COLUMN = os.getenv(\"TARGET_COLUMN\", \"Cover_Type\")\nBATCH_PREDS_SOURCE_URI = os.getenv(\"BATCH_PREDS_SOURCE_URI\")\n\[email protected](\n name=f\"{PIPELINE_NAME}-vertex-automl-pipeline-batch-preds\",\n description=f\"AutoML Vertex Pipeline for {PIPELINE_NAME}\",\n pipeline_root=PIPELINE_ROOT,\n)\ndef create_pipeline():\n\n dataset_create_task = TabularDatasetCreateOp(\n display_name=DISPLAY_NAME,\n bq_source=DATASET_SOURCE,\n project=PROJECT,\n )\n\n automl_training_task = AutoMLTabularTrainingJobRunOp(\n project=PROJECT,\n display_name=DISPLAY_NAME,\n optimization_prediction_type=\"classification\",\n dataset=dataset_create_task.outputs[\"dataset\"],\n target_column=TARGET_COLUMN,\n )\n\n batch_predict_op = ModelBatchPredictOp(\n project=PROJECT,\n job_display_name=\"batch_predict_job\",\n model=automl_training_task.outputs[\"model\"],\n bigquery_source_input_uri=BATCH_PREDS_SOURCE_URI,\n instances_format=\"bigquery\",\n predictions_format=\"bigquery\",\n bigquery_destination_output_uri=f'bq://{PROJECT}',\n )\n",
"Understanding the ModelBatchPredictOp\nWhen working with an AutoML Tabular model, the ModelBatchPredictOp can take the following inputs:\n* model: The model resource to serve batch predictions with\n* bigquery_source_uri: A URI to a BigQuery table containing examples to serve batch predictions on in the format bq://PROJECT.DATASET.TABLE\n* instances_format: \"bigquery\" to serve batch predictions on BigQuery data.\n* predictions_format: \"bigquery\" to store the results of the batch prediction in BigQuery.\n* bigquery_destination_output_uri: In the format bq://PROJECT_ID. This is the project that the results of the batch prediction will be stored. The ModelBatchPredictOp will create a dataset in this project.\nUpon completion of the ModelBatchPredictOp you will see a new BigQuery dataset with name prediction_<model-display-name>_<job-create-time>. Inside this dataset you will see a predictions table, containing the batch prediction examples and predicted labels. If there were any errors in the batch prediction, you will also see an errors table. The errors table contains rows for which the prediction has failed.\nCreate BigQuery table with data for batch predictions\nBefore we compile and run the pipeline, let's create a BigQuery table with data we want to serve batch predictions on. To simulate \"new\" data we will simply query the existing table for all columns except the label and create a table called newdata. The URI to this table will be the bigquery_source_input_uri input to the ModelBatchPredictOp.",
"%%bigquery\nCREATE OR REPLACE TABLE covertype_dataset.newdata AS \nSELECT * EXCEPT(Cover_Type)\nFROM covertype_dataset.covertype\nLIMIT 10000",
"Compile the pipeline\nLet's start by defining the environment variables that will be passed to the pipeline compiler:",
"ARTIFACT_STORE = f\"gs://{PROJECT}-kfp-artifact-store\"\nPIPELINE_ROOT = f\"{ARTIFACT_STORE}/pipeline\"\nDATASET_SOURCE = f\"bq://{PROJECT}.covertype_dataset.covertype\"\nBATCH_PREDS_SOURCE_URI = f\"bq://{PROJECT}.covertype_dataset.newdata\"\n\n%env PIPELINE_ROOT={PIPELINE_ROOT}\n%env PROJECT={PROJECT}\n%env REGION={REGION}\n%env DATASET_SOURCE={DATASET_SOURCE}\n%env BATCH_PREDS_SOURCE_URI={BATCH_PREDS_SOURCE_URI}",
"Let us make sure that the ARTIFACT_STORE has been created, and let us create it if not:",
"!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}",
"Use the CLI compiler to compile the pipeline\nWe compile the pipeline from the Python file we generated into a JSON description using the following command:",
"PIPELINE_JSON = \"covertype_automl_vertex_pipeline_batch_preds.json\"\n\n!dsl-compile-v2 --py pipeline_vertex/pipeline_vertex_automl_batch_preds.py --output $PIPELINE_JSON",
"Note: You can also use the Python SDK to compile the pipeline:\n```python\nfrom kfp.v2 import compiler\ncompiler.Compiler().compile(\n pipeline_func=create_pipeline, \n package_path=PIPELINE_JSON,\n)\n```\nThe result is the pipeline file.",
"!head {PIPELINE_JSON}",
"Deploy the pipeline package",
"aiplatform.init(project=PROJECT, location=REGION)\n\npipeline = aiplatform.PipelineJob(\n display_name=\"automl_covertype_kfp_pipeline_batch_predictions\",\n template_path=PIPELINE_JSON,\n enable_caching=True,\n)\n\npipeline.run()",
"Understanding the resources created by BatchPredictOp\nOnce the pipeline has finished running you will see a new BigQuery dataset with name prediction_<model-display-name>_<job-create-time>. Inside this dataset you will see a predictions table, containing the batch prediction examples and predicted labels. If there were any errors in the batch prediction, you will also see an errors table. The errors table contains rows for which the prediction has failed.\nCopyright 2021 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
msschwartz21/craniumPy | experiments/templates/TEMP-landmarks.ipynb | gpl-3.0 | [
"Introduction: Landmarks",
"import deltascope as ds\nimport deltascope.alignment as ut\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.preprocessing import normalize\nfrom scipy.optimize import minimize\n\nimport os\nimport tqdm\nimport json\nimport time",
"Import raw data\nThe user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.",
"# --------------------------------\n# -------- User input ------------\n# --------------------------------\n\ndata = {\n # Specify sample type key\n 'wt': {\n # Specify path to data directory\n 'path': 'path\\to\\data\\directory\\sample1',\n # Specify which channels are in the directory and are of interest\n 'channels': ['AT','ZRF']\n },\n 'stype2': {\n 'path': 'path\\to\\data\\directory\\sample2',\n 'channels': ['AT','ZRF']\n }\n}",
"We'll generate a list of pairs of stypes and channels for ease of use.",
"data_pairs = []\nfor s in data.keys():\n for c in data[s]['channels']:\n data_pairs.append((s,c))",
"We can now read in all datafiles specified by the data dictionary above.",
"D = {}\nfor s in data.keys():\n D[s] = {}\n for c in data[s]['channels']:\n D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)",
"Calculate landmark bins",
"# --------------------------------\n# -------- User input ------------\n# --------------------------------\n\n# Pick an integer value for bin number based on results above\nanum = 25\n\n# Specify the percentiles which will be used to calculate landmarks\npercbins = [50]",
"Calculate landmark bins based on user input parameters and the previously specified control sample.",
"lm = ds.landmarks(percbins=percbins, rnull=np.nan)\nlm.calc_bins(D[s_ctrl][c_ctrl], anum, theta_step)\n\nprint('Alpha bins')\nprint(lm.acbins)\nprint('Theta bins')\nprint(lm.tbins)",
"Calculate landmarks",
"lmdf = pd.DataFrame()\n\n# Loop through each pair of stype and channels\nfor s,c in tqdm.tqdm(data_pairs):\n print(s,c)\n # Calculate landmarks for each sample with this data pair\n for k,df in tqdm.tqdm(D[s][c].items()):\n lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)\n \n# Set timestamp for saving data\ntstamp = time.strftime(\"%m-%d-%H-%M\",time.localtime())\n \n# Save completed landmarks to a csv file\nlmdf.to_csv(tstamp+'_landmarks.csv')\n\n# Save landmark bins to json file\nbins = {\n 'acbins':list(lm.acbins),\n 'tbins':list(lm.tbins)\n}\nwith open(tstamp+'_landmarks_bins.json', 'w') as outfile:\n json.dump(bins, outfile)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | dev/_downloads/1242d47b65d952f9f80cf19fb9e5d76e/35_eeg_no_mri.ipynb | bsd-3-clause | [
"%matplotlib inline",
"EEG forward operator with a template MRI\nThis tutorial explains how to compute the forward operator from EEG data\nusing the standard template MRI subject fsaverage.\n.. caution:: Source reconstruction without an individual T1 MRI from the\n subject will be less accurate. Do not over interpret\n activity locations which can be off by multiple centimeters.\nAdult template MRI (fsaverage)\nFirst we show how fsaverage can be used as a surrogate subject.",
"# Authors: Alexandre Gramfort <[email protected]>\n# Joan Massich <[email protected]>\n# Eric Larson <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport os.path as op\nimport numpy as np\n\nimport mne\nfrom mne.datasets import eegbci\nfrom mne.datasets import fetch_fsaverage\n\n# Download fsaverage files\nfs_dir = fetch_fsaverage(verbose=True)\nsubjects_dir = op.dirname(fs_dir)\n\n# The files live in:\nsubject = 'fsaverage'\ntrans = 'fsaverage' # MNE has a built-in fsaverage transformation\nsrc = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')\nbem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')",
"Load the data\nWe use here EEG data from the BCI dataset.\n<div class=\"alert alert-info\"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages\n available in MNE-Python.</p></div>",
"raw_fname, = eegbci.load_data(subject=1, runs=[6])\nraw = mne.io.read_raw_edf(raw_fname, preload=True)\n\n# Clean channel names to be able to use a standard 1005 montage\nnew_names = dict(\n (ch_name,\n ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))\n for ch_name in raw.ch_names)\nraw.rename_channels(new_names)\n\n# Read and set the EEG electrode locations, which are already in fsaverage's\n# space (MNI space) for standard_1020:\nmontage = mne.channels.make_standard_montage('standard_1005')\nraw.set_montage(montage)\nraw.set_eeg_reference(projection=True) # needed for inverse modeling\n\n# Check that the locations of EEG electrodes is correct with respect to MRI\nmne.viz.plot_alignment(\n raw.info, src=src, eeg=['original', 'projected'], trans=trans,\n show_axes=True, mri_fiducials=True, dig='fiducials')",
"Setup source space and compute forward",
"fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,\n bem=bem, eeg=True, mindist=5.0, n_jobs=None)\nprint(fwd)",
"From here on, standard inverse imaging methods can be used!\nInfant MRI surrogates\nWe don't have a sample infant dataset for MNE, so let's fake a 10-20 one:",
"ch_names = \\\n 'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()\ndata = np.random.RandomState(0).randn(len(ch_names), 1000)\ninfo = mne.create_info(ch_names, 1000., 'eeg')\nraw = mne.io.RawArray(data, info)",
"Get an infant MRI template\nTo use an infant head model for M/EEG data, you can use\n:func:mne.datasets.fetch_infant_template to download an infant template:",
"subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)",
"It comes with several helpful built-in files, including a 10-20 montage\nin the MRI coordinate frame, which can be used to compute the\nMRI<->head transform trans:",
"fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')\nmon = mne.channels.read_dig_fif(fname_1020)\nmon.rename_channels(\n {f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})\ntrans = mne.channels.compute_native_head_t(mon)\nraw.set_montage(mon)\nprint(trans)",
"There are also BEM and source spaces:",
"bem_dir = op.join(subjects_dir, subject, 'bem')\nfname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')\nsrc = mne.read_source_spaces(fname_src)\nprint(src)\nfname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')\nbem = mne.read_bem_solution(fname_bem)",
"You can ensure everything is as expected by plotting the result:",
"fig = mne.viz.plot_alignment(\n raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,\n src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,\n surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))\nmne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])",
"From here, standard forward and inverse operators can be computed\nIf you have digitized head positions or MEG data, consider using\nmne coreg to warp a suitable infant template MRI to your\ndigitization information."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Centre-Alt-Rendiment-Esportiu/att | notebooks/Serial Ports.ipynb | gpl-3.0 | [
"<h1>Serial Ports</h1>\n<hr style=\"border: 1px solid #000;\">\n<span>\n<h2>Serial Port abstraction for ATT.</h2>\n</span>\n<br>\n<span>\nThis notebook shows the ATT Serial Port abstraction module.<br>\nThis module was created for enabling testing on ATT framework.\nThe Serial Port abstraction provides an Abstract base class so it can be extended and implement whatever kind of serial port we need.\nWe have used this class hierarchy to build some Mocks, in order to test the ATT framework.\n</span>\nSet modules path first:",
"import sys\n#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')\nsys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')",
"The main abstract base class is the following one:\nclass SerialPort:\n metaclass = abc.ABCMeta\[email protected]\ndef isOpen(self):\n pass\n\[email protected]\ndef readline(self):\n pass\n\[email protected]\ndef close(self):\n pass\n\[email protected]\ndef get_port(self):\n return \"\"\n\[email protected]\ndef get_baudrate(self):\n return 0\n\nAs an example, we can see a dummy implementation:\nclass DummySerialPort (SerialPort):\n def init(self, port = None, baud = None):\n pass\ndef isOpen(self):\n return True\n\ndef close(self):\n pass\n\ndef get_port(self):\n return \"\"\n\ndef get_baudrate(self):\n return 0\n\ndef readline(self):\n time_delay = int(3*random.random())+1\n time.sleep(time_delay)\n return self.gen_random_line()\n\ndef gen_random_line(self):\n return \"Hee\"\n\n<h2>Building Serial Ports</h2>\n\n<span>\nIn order to build an instance of a SerialPort class, we have 2 options:\n<ul>\n<li>Call the constructor directly</li>\n<li>Use a Builder</li>\n</ul>\n</span>\n<h3>Calling the constructor</h3>",
"import hit.serial.serial_port\n\nport=\"\"\nbaud=0\ndummySerialPort = hit.serial.serial_port.DummySerialPort(port, baud)",
"<span>\nThe DummSerialPort is very simple. It just says \"Hee\" (after a few seconds) when its method \"readline()\" is called.<br>\nPort and Baud are useless here.\n</span>",
"print dummySerialPort.readline()",
"<span>\nLet's create a more interesting Serialport instance,\n</span>",
"import hit.serial.serial_port\n\nport=\"\"\nbaud=0\nemulatedSerialPort = hit.serial.serial_port.ATTEmulatedSerialPort(port, baud)",
"<span>\nThe ATTEmulatedSerialPort will emulate a real ATT serial port reading.<br>\nPort and Baud are useless here.\n</span>",
"print emulatedSerialPort.readline()",
"<h3>Using a Builder</h3>\n\n<span>\nLet's use a builder now.\n</span>\n<span>\nWe can choose the builder we want and build as many SerialPorts we want.\n</span>",
"import hit.serial.serial_port_builder\n\nbuilder = hit.serial.serial_port_builder.ATTEmulatedSerialPortBuilder()\n\nport=\"\"\nbaud=0\n\nemulatedSerialPort1 = builder.build_serial_port(port, baud)\nemulatedSerialPort2 = builder.build_serial_port(port, baud)\nemulatedSerialPort3 = builder.build_serial_port(port, baud)\nemulatedSerialPort4 = builder.build_serial_port(port, baud)\nemulatedSerialPort5 = builder.build_serial_port(port, baud)\nemulatedSerialPort6 = builder.build_serial_port(port, baud)\nemulatedSerialPort7 = builder.build_serial_port(port, baud)",
"<span>\nAnd call \"readline()\"\n</span>",
"print emulatedSerialPort5.readline()",
"<span>\nThere is a special Serial port abstraction that is fed from a file.<br>\nThis is useful when we want to \"mock\" the serial port and give it previously stored readings.\n</span>\n<span>\nThis is interesting, for example, in order to reproduce, or visualize the repetition of an interesting set of hits in a game. Because Serial line is Real-Time, there are situations where it is needed to provide the ATT framework with a set of know hits, previously stored.\n</span>\n<span>\nWe can use the data use in \"Train points importer\".\n</span>",
"!head -10 train_points_import_data/arduino_raw_data.txt\n\nimport hit.serial.serial_port_builder\n\nbuilder = hit.serial.serial_port_builder.ATTHitsFromFilePortBuilder()\n\nport=\"train_points_import_data/arduino_raw_data.txt\"\nbaud=0\n\nfileSerialPort = builder.build_serial_port(port, baud)",
"<span>\nAnd now we will read some lines:\n</span>",
"for i in range(20):\n print fileSerialPort.readline()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mdda/fossasia-2016_deep-learning | notebooks/2-CNN/5-TransferLearning/5-ImageClassifier-keras.ipynb | mit | [
"Re-Purposing a Pretrained Network\nSince a large CNN is very time-consuming to train (even on a GPU), and requires huge amounts of data, is there any way to use a pre-calculated one instead of retraining the whole thing from scratch?\nThis notebook shows how this can be done. And it works surprisingly well.\nHow do we classify images with untrained classes?\nThis notebook extracts a vector representation of a set of images using a CNN created by Google and pretrained on ImageNet. It then builds a 'simple SVM classifier', allowing new images can be classified directly. No retraining of the original CNN is required.",
"import os\n\nfrom tensorflow import keras # Works with TF 1.12\n#import keras\n\nimport numpy as np\nimport scipy\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport time\n\nCLASS_DIR='./images/cars'\n#CLASS_DIR='./images/seefood' # for HotDog vs NotHotDog",
"Use Keras Model Zoo",
"# https://www.tensorflow.org/api_docs/python/tf/keras/applications/\n#from tensorflow.keras.preprocessing import image as keras_preprocessing_image\nfrom tensorflow.keras.preprocessing import image as keras_preprocessing_image",
"Architecture Choices\n\nNASNet cell structure\n\nEnsure we have the model loaded",
"#from tensorflow.python.keras.applications.nasnet import NASNetLarge, preprocess_input\n#model = NASNetLarge(weights='imagenet', include_top=False) # 343,608,736\n\nfrom tensorflow.keras.applications.nasnet import NASNetMobile, preprocess_input, decode_predictions\n\nmodel_imagenet = NASNetMobile(weights='imagenet', include_top=True) # 24,226,656 bytes\nprint(\"Model Loaded\")",
"Build the model and select layers we need - the features are taken from the final network layer, before the softmax nonlinearity.",
"def image_to_input(model, img_path):\n target_size=model.input_shape[1:]\n img = keras_preprocessing_image.load_img(img_path, target_size=target_size)\n \n x = keras_preprocessing_image.img_to_array(img)\n x = np.expand_dims(x, axis=0)\n x = preprocess_input(x)\n\n return x\n\ndef get_single_prediction(img_path, top=5):\n x = image_to_input(model_imagenet, img_path)\n preds = model_imagenet.predict(x)\n predictions = decode_predictions(preds, top=top)\n return predictions[0]\n\nimg_path = './images/cat-with-tongue_224x224.jpg'\nim = plt.imread(img_path)\nplt.imshow(im)\nplt.show()\nfor t in get_single_prediction(img_path):\n print(\"%6.2f %s\" % (t[2],t[1],))\n\nimage_dir = './images/'\n\nimage_files = [ os.path.join(image_dir, f) for f in os.listdir(image_dir) \n if (f.lower().endswith('png') or f.lower().endswith('jpg')) and f!='logo.png' ]\n\nt0 = time.time()\nfor i, f in enumerate(image_files):\n im = plt.imread(f)\n if not (im.shape[0]==224 and im.shape[1]==224):\n continue\n \n plt.figure()\n plt.imshow(im.astype('uint8'))\n \n top5 = get_single_prediction(f)\n for n, (id,label,prob) in enumerate(top5):\n plt.text(350, 50 + n * 25, '{}. {}'.format(n+1, label), fontsize=14)\n plt.axis('off')\n \nprint(\"DONE : %6.2f seconds each\" %(float(time.time() - t0)/len(image_files),))\n\n#model_imagenet=None\n\nmodel_imagenet.summary()",
"Transfer Learning\nNow, we'll work with the layer 'just before' the final (ImageNet) classification layer.",
"#model_logits = NASNetMobile(weights='imagenet', include_top=False, pooling=None) # 19,993,200 bytes\n#logits_layer = model_imagenet.get_layer('global_average_pooling2d_1')\nlogits_layer = model_imagenet.get_layer('predictions')\nmodel_logits = keras.Model(inputs=model_imagenet.input, \n outputs=logits_layer.output)\nprint(\"Model Loaded\")",
"Use the Network to create 'features' for the training images\nNow go through the input images and feature-ize them at the 'logit level' according to the pretrained network.\n<!-- [Logits vs the softmax probabilities](images/presentation/softmax-layer-generic_676x327.png) !-->\n\n\nNB: The pretraining was done on ImageNet - there wasn't anything specific to the recognition task we're doing here.\nDisplay the network layout graph on TensorBoard\nThis isn't very informative, since the CNN graph is pretty complex...",
"#writer = tf.summary.FileWriter(logdir='../tensorflow.logdir/', graph=tf.get_default_graph())\n#writer.flush()",
"Handy cropping function",
"def crop_middle_square_area(np_image):\n h, w, _ = np_image.shape\n h = int(h/2)\n w = int(w/2)\n if h>w:\n return np_image[ h-w:h+w, : ]\n return np_image[ :, w-h:w+h ] \nim_sq = crop_middle_square_area(im)\nim_sq.shape\n\ndef get_logits_from_non_top(np_logits):\n # ~ average pooling\n #return np_logits[0].sum(axis=0).sum(axis=0)\n \n # ~ max-pooling\n return np_logits[0].max(axis=0).max(axis=0)",
"Use folder names to imply classes for Training Set",
"classes = sorted( [ d for d in os.listdir(CLASS_DIR) if os.path.isdir(os.path.join(CLASS_DIR, d)) ] )\nclasses # Sorted for for consistency\n\ntrain = dict(filepath=[], features=[], target=[])\n\nt0 = time.time()\n\nfor class_i, directory in enumerate(classes):\n for filename in os.listdir(os.path.join(CLASS_DIR, directory)):\n filepath = os.path.join(CLASS_DIR, directory, filename)\n if os.path.isdir(filepath): continue\n\n im = plt.imread(filepath)\n im_sq = crop_middle_square_area(im)\n\n x = image_to_input(model_logits, filepath)\n #np_logits = model_logits.predict(x) # Shape = 1x7x7x1056 if pooling=None\n #print(np_logits.shape)\n #np_logits_pooled = get_logits_from_non_top( np_logits )\n \n np_logits_pooled = model_logits.predict(x)[0] # Shape = 1x1056 if pooling=avg\n \n train['filepath'].append(filepath)\n train['features'].append(np_logits_pooled)\n train['target'].append( class_i )\n\n plt.figure()\n plt.imshow(im_sq.astype('uint8'))\n plt.axis('off')\n\n plt.text(2*320, 50, '{}'.format(filename), fontsize=14)\n plt.text(2*320, 80, 'Train as class \"{}\"'.format(directory), fontsize=12)\n\nprint(\"DONE : %6.2f seconds each\" %(float(time.time() - t0)/len(train),))",
"Build an SVM model over the features",
"from sklearn import svm\nclassifier = svm.LinearSVC()\nclassifier.fit(train['features'], train['target']) # learn from the data ",
"Use the SVM model to classify the test set",
"test_image_files = [f for f in os.listdir(CLASS_DIR) if not os.path.isdir(os.path.join(CLASS_DIR, f))]\n\nt0 = time.time()\nfor filename in sorted(test_image_files):\n filepath = os.path.join(CLASS_DIR, filename)\n im = plt.imread(filepath)\n im_sq = crop_middle_square_area(im)\n\n # This is two ops : one merely loads the image from numpy, \n # the other runs the network to get the class probabilities\n x = image_to_input(model_logits, filepath)\n #np_logits = model_logits.predict(x) # Shape = 1x7x7x1056\n #np_logits_pooled = get_logits_from_non_top( np_logits )\n \n np_logits_pooled = model_logits.predict(x)[0] # Shape = 1x1056\n\n prediction_i = classifier.predict([ np_logits_pooled ])\n decision = classifier.decision_function([ np_logits_pooled ])\n\n plt.figure()\n plt.imshow(im_sq.astype('uint8'))\n plt.axis('off')\n\n prediction = classes[ prediction_i[0] ]\n\n plt.text(2*320, 50, '{} : Distance from boundary = {:5.2f}'.format(prediction, decision[0]), fontsize=20)\n plt.text(2*320, 75, '{}'.format(filename), fontsize=14)\n\nprint(\"DONE : %6.2f seconds each\" %(float(time.time() - t0)/len(test_image_files),))",
"Exercise : Try your own ideas\nThe whole training regime here is based on the way the image directories are structured. So building your own example shouldn't be very difficult.\nSuppose you wanted to classify pianos into Upright and Grand : \n\nCreate a pianos directory and point the CLASS_DIR variable at it\nWithin the pianos directory, create subdirectories for each of the classes (i.e. Upright and Grand). The directory names will be used as the class labels\nInside the class directories, put a 'bunch' of positive examples of the respective classes - these can be images in any reasonable format, of any size (no smaller than 224x224).\nThe images will be automatically resized so that their smallest dimension is 224, and then a square 'crop' area taken from their centers (since ImageNet networks are typically tuned to answering on 224x224 images)\nTest images should be put in the pianos directory itelf (which is logical, since we don't know their classes yet)\n\nFinally, re-run everything - checking that the training images are read in correctly, that there are no errors along the way, and that (finally) the class predictions on the test set come out as expected.\nIf/when it works - please let everyone know : We can add that as an example for next time..."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
molgor/spystats | notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb | bsd-2-clause | [
"Here I'm process by chunks the entire region.",
"# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n## Use the ggplot style\nplt.style.use('ggplot')\n\nfrom external_plugins.spystats import tools\n%run ../testvariogram.py\n\nsection.shape",
"Algorithm for processing Chunks\n\nMake a partition given the extent\nProduce a tuple (minx ,maxx,miny,maxy) for each element on the partition\nCalculate the semivariogram for each chunk and save it in a dataframe\nPlot Everything\nDo the same with a mMatern Kernel",
"minx,maxx,miny,maxy = getExtent(new_data)\n\nmaxy\n\n## If prefered a fixed number of chunks\nN = 100\nxp,dx = np.linspace(minx,maxx,N,retstep=True)\nyp,dy = np.linspace(miny,maxy,N,retstep=True)\n### Distance interval\nprint(dx)\nprint(dy)\n\n## Let's build the partition \n## If prefered a fixed size of chunk\nds = 300000 #step size (meters)\nxp = np.arange(minx,maxx,step=ds)\nyp = np.arange(miny,maxy,step=ds)\ndx = ds\ndy = ds\nN = len(xp)\n\n\nxx,yy = np.meshgrid(xp,yp)\n\nNx = xp.size\nNy = yp.size\n\n#coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]\n\ncoordinates_list = [ (xx[i][j],yy[i][j]) for i in range(Ny) for j in range(Nx)]\n\n\nfrom functools import partial\ntuples = map(lambda (x,y) : partial(getExtentFromPoint,x,y,step_sizex=dx,step_sizey=dy)(),coordinates_list)\n\nchunks = map(lambda (mx,Mx,my,My) : subselectDataFrameByCoordinates(new_data,'newLon','newLat',mx,Mx,my,My),tuples)\n\n## Here we can filter based on a threshold\nthreshold = 20\nchunks_non_empty = filter(lambda df : df.shape[0] > threshold ,chunks)\n\nlen(chunks_non_empty)\n\nlengths = pd.Series(map(lambda ch : ch.shape[0],chunks_non_empty))\n\nlengths.plot.hist()",
"For efficiency purposes we restrict to 10 variograms",
"smaller_list = chunks_non_empty[:10]\nvariograms =map(lambda chunk : tools.Variogram(chunk,'residuals1',using_distance_threshold=200000),smaller_list)\n\nvars = map(lambda v : v.calculateEmpirical(),variograms)\nvars = map(lambda v : v.calculateEnvelope(num_iterations=50),variograms)",
"Take an average of the empirical variograms also with the envelope.\nWe will use the group by directive on the field lags",
"envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)\nenvhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)\nvariogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)\n\nlags = vars[0][['lags']]\n\nmeanlow = list(envslow.apply(lambda row : np.mean(row),axis=1))\nmeanhigh = list(envhigh.apply(np.mean,axis=1))\nmeanvariogram = list(variogram.apply(np.mean,axis=1))\nresults = pd.DataFrame({'meanvariogram':meanvariogram,'meanlow':meanlow,'meanhigh':meanhigh})\n\nresult_envelope = pd.concat([lags,results],axis=1)\n\nmeanvg = tools.Variogram(section,'residuals1')\n\nmeanvg.plot()\n\nmeanvg.envelope.columns\n\nresult_envelope.columns\n\nresult_envelope.columns = ['lags','envhigh','envlow','variogram']\n\nmeanvg.envelope = result_envelope\n\nmeanvg.plot(refresh=False)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
steinam/teacher | jup_notebooks/datenbanken/Sommer_2015.ipynb | mit | [
"Subselects",
"%load_ext sql\n\n\n%sql mysql://steinam:steinam@localhost/sommer_2015",
"Sommer 2015\nDatenmodell\n\nAufgabe\nErstellen Sie eine Abfrage, mit der Sie die Daten aller Kunden, die Anzahl deren Aufträge, die Anzahl der Fahrten und die Summe der Streckenkilometer erhalten. Die Ausgabe soll nach Kunden-PLZ absteigend sortiert sein.\n\nLösung",
"%%sql \n\n\n%sql select count(*) as AnzahlFahrten from fahrten",
"Warum geht kein Join ??\n```mysql\n```",
"%%sql \n\nselect k.kd_id, k.`kd_firma`, k.`kd_plz`, \n count(distinct a.Au_ID) as AnzAuftrag, \n count(distinct f.f_id) as AnzFahrt, \n sum(distinct ts.ts_strecke) as SumStrecke \nfrom kunde k left join auftrag a on k.`kd_id` = a.`au_kd_id` \n left join fahrten f on a.`au_id` = f.`f_au_id` \n left join teilstrecke ts on ts.`ts_f_id` = f.`f_id` \n group by k.kd_id order by k.`kd_plz`\n",
"Der Ansatz mit Join funktioniert in dieser Form nicht, da spätestens beim 2. Join die Firma Trappo mit 2 Datensätzen aus dem 1. Join verknüpft wird. Deshalb wird auch die Anzahl der Fahren verdoppelt. Dies wiederholt sich beim 3. Join.\nDie folgende Abfrage zeigt ohne die Aggregatfunktionen das jeweilige Ausgangsergebnis\nmysql\nselect k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`\nfrom kunde k left join auftrag a\n on k.`kd_id` = a.`au_kd_id`\nleft join fahrten f\n on a.`au_id` = f.`f_au_id`\nleft join teilstrecke ts\n on ts.`ts_f_id` = f.`f_id`\norder by k.`kd_plz`",
"%sql select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id` from kunde k left join auftrag a on k.`kd_id` = a.`au_kd_id` left join fahrten f on a.`au_id` = f.`f_au_id` left join teilstrecke ts on ts.`ts_f_id` = f.`f_id` order by k.`kd_plz`",
"Winter 2015\nDatenmodell\n\nHinweis: In Rechnung gibt es zusätzlich ein Feld Rechnung.Kd_ID\nAufgabe\nErstellen Sie eine SQL-Abfrage, mit der alle Kunden wie folgt aufgelistet werden, bei denen eine Zahlungsbedingung mit einem Skontosatz größer 3 % ist, mit Ausgabe der Anzahl der hinterlegten Rechnungen aus dem Jahr 2015.\n\nLösung",
"%sql mysql://steinam:steinam@localhost/winter_2015",
"``mysql\nselect count(rechnung.Rg_ID), kunde.Kd_Namefrom rechnung inner join kunde\n onrechnung.Rg_KD_ID= kunde.Kd_IDinner joinzahlungsbedingungon kunde.Kd_Zb_ID=zahlungsbedingung.Zb_IDwherezahlungsbedingung.Zb_SkontoProzent> 3.0\n and year(rechnung.Rg_Datum) = 2015\ngroup by Kunde.Kd_Name`\n```",
"%%sql \nselect count(rechnung.`Rg_ID`), kunde.`Kd_Name` from rechnung \n inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID` \n inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID` \n where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0 \n and year(`rechnung`.`Rg_Datum`) = 2015 group by Kunde.`Kd_Name`",
"Es geht auch mit einem Subselect\n``mysql\n select kd.Kd_Name, \n (select COUNT(*) from Rechnung as R\n where R.Rg_KD_ID= KD.Kd_IDand year(R.Rg_Datum`) = 2015)\nfrom Kunde kd inner join `zahlungsbedingung` \non kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`\n\nand zahlungsbedingung.Zb_SkontoProzent > 3.0\n```",
"%%sql \nselect kd.`Kd_Name`, \n(select COUNT(*) from Rechnung as R \n where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl\nfrom Kunde kd inner join `zahlungsbedingung` \n on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID` \n and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0",
"Versicherung\nZeigen Sie zu jedem Mitarbeiter der Abteilung „Vertrieb“ den ersten Vertrag (mit einigen Angaben) an, den er abgeschlossen hat. Der Mitarbeiter soll mit ID und Name/Vorname angezeigt werden.\nDatenmodell Versicherung",
"%sql -- your code goes here",
"Lösung",
"%sql mysql://steinam:steinam@localhost/versicherung_complete\n\n%%sql \nselect min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`\nfrom `versicherungsvertrag` vv inner join mitarbeiter m \n on vv.`Mitarbeiter_ID` = m.`ID`\nwhere vv.`Mitarbeiter_ID` in ( select m.`ID` from mitarbeiter m \n inner join Abteilung a\n on m.`Abteilung_ID` = a.`ID`) \ngroup by vv.`Mitarbeiter_ID`\n\nresult = _\n\nresult"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/cams/cmip6/models/sandbox-3/ocnbgchem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: CAMS\nSource ID: SANDBOX-3\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cams', 'sandbox-3', 'ocnbgchem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\n3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\n4. Key Properties --> Transport Scheme\n5. Key Properties --> Boundary Forcing\n6. Key Properties --> Gas Exchange\n7. Key Properties --> Carbon Chemistry\n8. Tracers\n9. Tracers --> Ecosystem\n10. Tracers --> Ecosystem --> Phytoplankton\n11. Tracers --> Ecosystem --> Zooplankton\n12. Tracers --> Disolved Organic Matter\n13. Tracers --> Particules\n14. Tracers --> Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Elemental Stoichiometry\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n",
"1.5. Elemental Stoichiometry Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.7. Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Damping\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for passive tracers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"2.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for passive tracers (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime stepping framework for biology sources and sinks",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n",
"3.2. Timestep If Not From Ocean\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transport scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"4.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTransport scheme used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4.3. Use Different Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how atmospheric deposition is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n",
"5.2. River Input\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how river input is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n",
"5.3. Sediments From Boundary Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Sediments From Explicit Model\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList which sediments are speficied from explicit sediment model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.2. CO2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe CO2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.3. O2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs O2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.4. O2 Exchange Type\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nDescribe O2 gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. DMS Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs DMS gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.6. DMS Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify DMS gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.7. N2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.8. N2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.9. N2O Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs N2O gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.10. N2O Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify N2O gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.11. CFC11 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC11 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.12. CFC11 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.13. CFC12 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs CFC12 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.14. CFC12 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.15. SF6 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs SF6 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.16. SF6 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify SF6 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.17. 13CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.18. 13CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.19. 14CO2 Exchange Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.20. 14CO2 Exchange Type\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.21. Other Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any other gas exchange",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how carbon chemistry is modeled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n",
"7.2. PH Scale\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7.3. Constants If Not OMIP\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Sulfur Cycle Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sulfur cycle modeled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Nutrients Present\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Nitrous Species If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous species.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.5. Nitrous Processes If N\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf nitrogen present, list nitrous processes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Tracers --> Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Upper Trophic Levels Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDefine how upper trophic level are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Tracers --> Ecosystem --> Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of phytoplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n",
"10.2. Pft\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPhytoplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Tracers --> Ecosystem --> Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of zooplankton",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Size Classes\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nZooplankton size classes (if applicable)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Tracers --> Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there bacteria representation ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Lability\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Tracers --> Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Types If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Size If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n",
"13.4. Size If Discrete\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.5. Sinking Speed If Prognostic\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Tracers --> Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n",
"14.2. Abiotic Carbon\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs abiotic carbon modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.3. Alkalinity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is alkalinity modelled ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mvaz/osqf2015 | notebooks/DataPreparation.ipynb | mit | [
"Introduction\nSimply the first step to prepare the data for the following notebooks",
"import Quandl\nimport pandas as pd\nimport numpy as np\nimport blaze as bz",
"Data source is http://www.quandl.com.\nWe use blaze to store data.",
"with open('../.quandl_api_key.txt', 'r') as f:\n api_key = f.read()\n\ndb = Quandl.get(\"EOD/DB\", authtoken=api_key)\nbz.odo(db['Rate'].reset_index(), '../data/db.bcolz')\n\nfx = Quandl.get(\"CURRFX/EURUSD\", authtoken=api_key)\nbz.odo(fx['Rate'].reset_index(), '../data/eurusd.bcolz')",
"Can also migrate it to a sqlite database",
"bz.odo('../data/db.bcolz', 'sqlite:///osqf.db::db')\n\n%load_ext sql\n\n%%sql sqlite:///osqf.db\nselect * from db",
"Can perform queries",
"d = bz.Data('../data/db.bcolz')\nd.Close.max()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/migration/UJ14 legacy AutoML Vision Video Classification.ipynb | apache-2.0 | [
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n",
"AutoML SDK: AutoML image classification model\nInstallation\nInstall the latest (preview) version of AutoML SDK.",
"! pip3 install -U google-cloud-automl --user\n",
"Install the Google cloud-storage library as well.",
"! pip3 install google-cloud-storage\n",
"Restart the Kernel\nOnce you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"import os\n\n\nif not os.getenv(\"AUTORUN\"):\n # Automatically restart kernel after installs\n import IPython\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)\n",
"Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the AutoML APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in AutoML Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" #@param {type:\"string\"}\n\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n\n! gcloud config set project $PROJECT_ID\n",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services",
"REGION = 'us-central1' #@param {type: \"string\"}\n",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")\n",
"Authenticate your GCP account\nIf you are using AutoML Notebooks, your environment is already\nauthenticated. Skip this step.\nNote: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.",
"import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on AutoML, then don't execute this code\nif not os.path.exists('/opt/deeplearning/metadata/env_version'):\n if 'google.colab' in sys.modules:\n from google.colab import auth as google_auth\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login\n",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.",
"BUCKET_NAME = \"[your-bucket-name]\" #@param {type:\"string\"}\n\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP\n",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION gs://$BUCKET_NAME\n",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al gs://$BUCKET_NAME\n",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport AutoML SDK\nImport the AutoML SDK into our Python environment.",
"import json\nimport os\nimport sys\nimport time\n\n\nfrom google.cloud import automl_v1beta1 as automl\n\n\nfrom google.protobuf.json_format import MessageToJson\nfrom google.protobuf.json_format import ParseDict\n",
"AutoML constants\nSetup up the following constants for AutoML:\n\nPARENT: The AutoML location root path for dataset, model and endpoint resources.",
"# AutoML location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION\n",
"Clients\nThe AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).\nYou will use several clients in this tutorial, so set them all up upfront.\n(?)",
"def automl_client():\n return automl.AutoMlClient()\n\ndef prediction_client():\n return automl.PredictionServiceClient()\n\ndef operations_client():\n return automl.AutoMlClient()._transport.operations_client\n\nclients = {}\nclients[\"automl\"] = automl_client()\nclients[\"prediction\"] = prediction_client()\nclients[\"operations\"] = operations_client()\n\nfor client in clients.items():\n print(client)\n\n\nIMPORT_FILE = 'gs://automl-video-demo-data/hmdb_split1.csv'\n\n\n! gsutil cat $IMPORT_FILE | head -n 10 \n",
"Example output:\nTRAIN,gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv\nTEST,gs://automl-video-demo-data/hmdb_split1_5classes_test_inf.csv\nCreate a dataset\nprojects.locations.datasets.create\nRequest",
"dataset = {\n \"display_name\": \"hmdb_\" + TIMESTAMP,\n \"video_classification_dataset_metadata\": {}\n}\n\nprint(MessageToJson(\n automl.CreateDatasetRequest(\n parent=PARENT,\n dataset=dataset\n ).__dict__[\"_pb\"])\n)\n",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"dataset\": {\n \"displayName\": \"hmdb_20210228225744\",\n \"videoClassificationDatasetMetadata\": {}\n }\n}\nCall",
"request = clients[\"automl\"].create_dataset(\n parent=PARENT,\n dataset=dataset\n)\n",
"Response",
"result = request\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/datasets/VCN6574174086275006464\",\n \"displayName\": \"hmdb_20210228225744\",\n \"createTime\": \"2021-02-28T23:06:43.197904Z\",\n \"etag\": \"AB3BwFrtf0Yl4fgnXW4leoEEANTAGQdOngyIqdQSJBT9pKEChgeXom-0OyH7dKtfvA4=\",\n \"videoClassificationDatasetMetadata\": {}\n}",
"# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split('/')[-1]\n\nprint(dataset_id)\n",
"projects.locations.datasets.importData\nRequest",
"input_config = {\n \"gcs_source\": {\n \"input_uris\": [IMPORT_FILE]\n }\n}\n\nprint(MessageToJson(\n automl.ImportDataRequest(\n name=dataset_short_id,\n input_config=input_config\n ).__dict__[\"_pb\"])\n)\n",
"Example output:\n{\n \"name\": \"VCN6574174086275006464\",\n \"inputConfig\": {\n \"gcsSource\": {\n \"inputUris\": [\n \"gs://automl-video-demo-data/hmdb_split1.csv\"\n ]\n }\n }\n}\nCall",
"request = clients[\"automl\"].import_data(\n name=dataset_id,\n input_config=input_config\n)\n",
"Response",
"result = request.result()\n\nprint(MessageToJson(result))\n",
"Example output:\n{}\nTrain a model\nprojects.locations.models.create\nRequest",
"model = {\n \"display_name\": \"hmdb_\" + TIMESTAMP,\n \"dataset_id\": dataset_short_id,\n \"video_classification_model_metadata\": {}\n}\n\nprint(MessageToJson(\n automl.CreateModelRequest(\n parent=PARENT,\n model=model\n ).__dict__[\"_pb\"])\n)\n",
"Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"model\": {\n \"displayName\": \"hmdb_20210228225744\",\n \"datasetId\": \"VCN6574174086275006464\",\n \"videoClassificationModelMetadata\": {}\n }\n}\nCall",
"request = clients[\"automl\"].create_model(\n parent=PARENT,\n model=model\n)\n",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/VCN6188818900239515648\"\n}",
"# The full unique ID for the training pipeline\nmodel_id = result.name\n# The short numeric ID for the training pipeline\nmodel_short_id = model_id.split('/')[-1]\n\nprint(model_short_id)\n",
"Evaluate the model\nprojects.locations.models.modelEvaluations.list\nCall",
"request = clients[\"automl\"].list_model_evaluations(\n parent=model_id\n)\n",
"Response",
"import json\n\n\nmodel_evaluations = [\n json.loads(MessageToJson(me.__dict__[\"_pb\"])) for me in request \n]\n# The evaluation slice\nevaluation_slice = request.model_evaluation[0].name\n\nprint(json.dumps(model_evaluations, indent=2))\n",
"Example output\n```\n[\n {\n \"name\": \"projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266\",\n \"createTime\": \"2021-03-01T01:02:02.452298Z\",\n \"evaluatedExampleCount\": 150,\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 1.0,\n \"confidenceMetricsEntry\": [\n {\n \"confidenceThreshold\": 0.016075565,\n \"recall\": 1.0,\n \"precision\": 0.2,\n \"f1Score\": 0.33333334\n },\n {\n \"confidenceThreshold\": 0.017114623,\n \"recall\": 1.0,\n \"precision\": 0.202977,\n \"f1Score\": 0.3374578\n },\n # REMOVED FOR BREVITY\n\n {\n \"confidenceThreshold\": 0.9299338,\n \"recall\": 0.033333335,\n \"precision\": 1.0,\n \"f1Score\": 0.06451613\n }\n ]\n},\n\"displayName\": \"golf\"\n\n}\n]\n```\nprojects.locations.models.modelEvaluations.get\nCall",
"request = clients[\"automl\"].get_model_evaluation(\n name=evaluation_slice\n)\n",
"Response",
"print(MessageToJson(request.__dict__[\"_pb\"]))\n",
"Example output:\n```\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/VCN6188818900239515648/modelEvaluations/1998146574672720266\",\n \"createTime\": \"2021-03-01T01:02:02.452298Z\",\n \"evaluatedExampleCount\": 150,\n \"classificationEvaluationMetrics\": {\n \"auPrc\": 1.0,\n \"confidenceMetricsEntry\": [\n {\n \"confidenceThreshold\": 0.016075565,\n \"recall\": 1.0,\n \"precision\": 0.2,\n \"f1Score\": 0.33333334\n },\n {\n \"confidenceThreshold\": 0.017114623,\n \"recall\": 1.0,\n \"precision\": 0.202977,\n \"f1Score\": 0.3374578\n },\n # REMOVED FOR BREVITY\n\n {\n \"confidenceThreshold\": 0.9299338,\n \"recall\": 0.006666667,\n \"precision\": 1.0,\n \"f1Score\": 0.013245033\n }\n],\n\"confusionMatrix\": {\n \"annotationSpecId\": [\n \"175274248095399936\",\n \"2048771693081526272\",\n \"4354614702295220224\",\n \"6660457711508914176\",\n \"8966300720722608128\"\n ],\n \"row\": [\n {\n \"exampleCount\": [\n 30,\n 0,\n 0,\n 0,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 30,\n 0,\n 0,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 30,\n 0,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 0,\n 30,\n 0\n ]\n },\n {\n \"exampleCount\": [\n 0,\n 0,\n 0,\n 0,\n 30\n ]\n }\n ],\n \"displayName\": [\n \"ride_horse\",\n \"golf\",\n \"cartwheel\",\n \"pullup\",\n \"kick_ball\"\n ]\n}\n\n}\n}\n```\nMake batch predictions\nMake the batch input file\nTo request a batch of predictions from AutoML Video, create a CSV file that lists the Cloud Storage paths to the videos that you want to annotate. You can also specify a start and end time to tell AutoML Video to only annotate a segment (segment-level) of the video. The start time must be zero or greater and must be before the end time. The end time must be greater than the start time and less than or equal to the duration of the video. You can also use inf to indicate the end of a video.\nexample:\ngs://my-videos-vcm/short_video_1.avi,0.0,5.566667\ngs://my-videos-vcm/car_chase.avi,0.0,3.933333",
"TRAIN_FILES = \"gs://automl-video-demo-data/hmdb_split1_5classes_train_inf.csv\"\n\ntest_items = ! gsutil cat $TRAIN_FILES | head -n2\n\ncols = str(test_items[0]).split(',')\ntest_item_1, test_label_1, test_start_1, test_end_1 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])\nprint(test_item_1, test_label_1)\n\ncols = str(test_items[1]).split(',')\ntest_item_2, test_label_2, test_start_2, test_end_2 = str(cols[0]), str(cols[1]), str(cols[2]), str(cols[3])\nprint(test_item_2, test_label_2)\n",
"Example output:\ngs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi cartwheel\ngs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi cartwheel",
"import tensorflow as tf\nimport json\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + '/test.csv'\nwith tf.io.gfile.GFile(gcs_input_uri, 'w') as f:\n data = f\"{test_item_1}, {test_start_1}, {test_end_1}\"\n f.write(data + '\\n')\n data = f\"{test_item_2}, {test_start_2}, {test_end_2}\"\n f.write(data + '\\n')\n \nprint(gcs_input_uri)\n! gsutil cat $gcs_input_uri\n",
"Example output:\ngs://migration-ucaip-trainingaip-20210228225744/test.csv\ngs://automl-video-demo-data/hmdb51/_Rad_Schlag_die_Bank__cartwheel_f_cm_np1_le_med_0.avi, 0.0, inf\ngs://automl-video-demo-data/hmdb51/Acrobacias_de_un_fenomeno_cartwheel_f_cm_np1_ba_bad_8.avi, 0.0, inf\nprojects.locations.models.batchPredict\nRequest",
"input_config = {\n \"gcs_source\": {\n \"input_uris\": [gcs_input_uri]\n }\n}\n\noutput_config = {\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\"\n }\n}\n\nbatch_prediction = automl.BatchPredictRequest(\n name=model_id,\n input_config=input_config,\n output_config=output_config\n)\n \nprint(MessageToJson(\n batch_prediction.__dict__[\"_pb\"])\n)\n",
"Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/models/VCN6188818900239515648\",\n \"inputConfig\": {\n \"gcsSource\": {\n \"inputUris\": [\n \"gs://migration-ucaip-trainingaip-20210228225744/test.csv\"\n ]\n }\n },\n \"outputConfig\": {\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210228225744/batch_output/\"\n }\n }\n}\nCall",
"request = clients[\"prediction\"].batch_predict(\n request=batch_prediction\n)\n",
"Response",
"result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))\n",
"Example output:\n{}\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.",
"delete_dataset = True\ndelete_model = True\ndelete_bucket = True\n\n# Delete the dataset using the AutoML fully qualified identifier for the dataset\ntry:\n if delete_dataset:\n clients['automl'].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the AutoML fully qualified identifier for the model\ntry:\n if delete_model:\n clients['automl'].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and 'BUCKET_NAME' in globals():\n ! gsutil rm -r gs://$BUCKET_NAME\n"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ryan-leung/PHYS4650_Python_Tutorial | notebooks/05-Python-Functions-Class.ipynb | bsd-3-clause | [
"Python Functions and Classes\nSometimes you need to define your own functions to work with custom data or solve some problems. A function can be defined with a prefix def. A class is like an umbrella that can contains many data types and functions, it is defined by class prefix.\n<a href=\"https://colab.research.google.com/github/ryan-leung/PHYS4650_Python_Tutorial/blob/master/notebooks/05-Python-Functions-Class.ipynb\"><img align=\"right\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\">\n</a>\nFunctions",
"def hello(a,b):\n return a+b\n\n# Lazy definition of function\nhello(1,1)\n\nhello('a','b')",
"Class\nClass is a blueprint defining the charactaristics and behaviors of an object. \npython\nclass MyClass:\n ...\n ...\nFor a simple class, one shall define an instance\npython\n__init__()\nto handle variable when it created. Let's try the following example:",
"class Person:\n def __init__(self,age,salary):\n self.age = age\n self.salary = salary\n def out(self):\n print(self.age)\n print(self.salary)",
"This is a basic class definition, the age and salary are needed when creating this object. The new class can be invoked like this:",
"a = Person(30,10000)\na.out()",
"The __init__ initilaze the variables stored in the class. When they are called inside the class, we should add a self. in front of the variable. The out(Self) method are arbitary functions that can be used by calling Yourclass.yourfunction(). The input to the functions can be added after the self input.\nPython Conditionals And Loops\nThe for statement\nThe for statement reads\nfor xxx in yyyy:\nyyyy shall be an iteratable, i.e. tuple or list or sth that can be iterate. After this line, user should add an indentation at the start of next line, either by space or tab.\nConditionals\nA conditional statement is a programming concept that describes whether a region of code runs based on if a condition is true or false. The keywords involved in conditional statements are if, and optionally elif and else.",
"# make a list\nstudents = ['boy', 'boy', 'girl', 'boy', 'girl', 'girl', 'boy', 'boy', 'girl', 'girl', 'boy', 'boy']\n\nboys = 0; girls = 0\n\nfor s in students:\n if s == 'boy':\n boys = boys +1\n else:\n girls+=1\n \nprint(\"boys:\", boys)\nprint(\"girls:\", girls)",
"The While statement\nThe While statement reads\nwhile CONDITIONAL:\nCONDITIONAL is a conditional statement, like i < 100 or a boolean variable. After this line, user should add an indentation at the start of next line, either by space or tab.",
"def int_sum(n):\n s=0; i=1\n while i < n:\n s += i*i\n i += 1\n return s\nint_sum(1000)",
"Performance",
"%timeit int_sum(100000)",
"<img src=\"images/numba-blue-horizontal-rgb.svg\" alt=\"numba\" style=\"width: 600px;\"/>\n<img src=\"images/numba_features.png\" alt=\"numba\" style=\"width: 600px;\"/>\nNumba translates Python functions to optimized machine code at runtime using the LLVM compiler library. Your functions will be translated to c-code during declarations. To install numba, \npython\npip install numba",
"import numba \n\[email protected]\ndef int_sum_nb(n):\n s=0; i=1\n while i < n:\n s += i*i\n i += 1\n return s\nint_sum_nb(1000)\n\n%timeit int_sum_nb(100000)",
"Examples",
"import random\ndef monte_carlo_pi(n):\n acc = 0\n for i in range(n):\n x = random.random()\n y = random.random()\n if (x**2 + y**2) < 1.0:\n acc += 1\n return 4.0 * acc / n\n\nmonte_carlo_pi(1000000)\n\n%timeit monte_carlo_pi(1000000)\n\[email protected]\ndef monte_carlo_pi_nb(n):\n acc = 0\n for i in range(n):\n x = random.random()\n y = random.random()\n if (x**2 + y**2) < 1.0:\n acc += 1\n return 4.0 * acc / n\n\nmonte_carlo_pi_nb(1000000)\n\n%timeit monte_carlo_pi_nb(1000000)\n\[email protected]\ndef monte_carlo_pi_nbmt(n):\n acc = 0\n for i in numba.prange(n):\n x = random.random()\n y = random.random()\n if (x**2 + y**2) < 1.0:\n acc += 1\n return 4.0 * acc / n\n\nmonte_carlo_pi_nbmt(1000000)\n\n%timeit monte_carlo_pi_nbmt(1000000)",
"Summary\nPython loops and recursive are not recommended because it needs a lot of system overhead to produce a function calls and check typing. But new tools are avaliable to convert these codes into high performance code."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gprakhar/janCC | Janacare_Habits_dataset_upto-7May2016.ipynb | bsd-3-clause | [
"Hello World!\nThis notebook describes the effort filter out users to resurrect with Digital Marketing\nClean up data\nde-duplicate : based on email i'd\nPartitioning the Data:\ntwo methods - \nA) cluster the data and see how many clusters are there: used MeanShift method\nB) Bin the data based on age_on_platform\nemail capaign to Ressurrect users\nApril 30th will be the cuttoff for the first_login value, for Binning\nLooking around the data set",
"%reset\n\n# Import the required modules\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\n\n# simple function to read in the user data file.\n# the argument parse_dates takes in a list of colums, which are to be parsed as date format\nuser_data_raw_csv = pd.read_csv(\"/home/eyebell/local_bin/janacare/janCC/datasets/Habits-Data_upto-7th-May.csv\",\\\n parse_dates = [-3, -2, -1])\n\n# import the pyexcel module\n#import pyexcel as pe\n#from pyexcel.ext import xls\n\n# load the file\n#records = pe.get_records(file_name=\"/home/eyebell/local_bin/janacare/datasets/Habits-Data_upto-7th-May.xls\")\n#len(records)\n#for record in records:\n #print record\n\n# data metrics\nuser_data_raw_csv.shape # Rows , colums\n\n# data metrics\nuser_data_raw_csv.dtypes # data type of colums\n\nuser_data_to_clean = user_data_raw_csv.copy()\n\n# Some basic statistical information on the data\n#user_data_to_clean.describe()",
"Data Clean up\nIn the last section of looking around, I saw that a lot of rows do not have any values or have garbage values(see first row of the table above).\nThis can cause errors when computing anything using the values in these rows, hence a clean up is required.\nIf a the coulums last_activity and first_login are empty then drop the corresponding row !",
"# Lets check the health of the data set\nuser_data_to_clean.info()",
"As is visible from the last column (age_on_platform) data type, Pandas is not recognising it as date type format. \nThis will make things difficult, so I delete this particular column and add a new one.\nSince the data in age_on_platform can be recreated by doing age_on_platform = last_activity - first_login \nBut on eyeballing I noticed some, cells of column first_login have greater value than corresponding cell of last_activity. These cells need to be swapped, since its not possible to have first_login > last_activity\nFinally the columns first_login, last_activity have missing values, as evident from above table. Since this is time data, that in my opinion should not be imputed, we will drop/delete the columns.",
"# Run a loop through the data frame and check each row for this anamoly, if found drop,\n# this is being done ONLY for selected columns\n\nimport datetime\n\nswapped_count = 0\nfirst_login_count = 0\nlast_activity_count = 0\nemail_count = 0\nuserid_count = 0\n\nfor index, row in user_data_to_clean.iterrows(): \n if row.last_activity == pd.NaT or row.last_activity != row.last_activity:\n last_activity_count = last_activity_count + 1\n #print row.last_activity\n user_data_to_clean.drop(index, inplace=True)\n\n elif row.first_login > row.last_activity:\n user_data_to_clean.drop(index, inplace=True)\n swapped_count = swapped_count + 1\n\n elif row.first_login != row.first_login or row.first_login == pd.NaT:\n user_data_to_clean.drop(index, inplace=True)\n first_login_count = first_login_count + 1\n\n elif row.email != row.email: #or row.email == '' or row.email == ' ':\n user_data_to_clean.drop(index, inplace=True)\n email_count = email_count + 1\n\n elif row.user_id != row.user_id:\n user_data_to_clean.drop(index, inplace=True)\n userid_count = userid_count + 1\n\nprint \"last_activity_count=%d\\tswapped_count=%d\\tfirst_login_count=%d\\temail_count=%d\\tuserid_count=%d\" \\\n% (last_activity_count, swapped_count, first_login_count, email_count, userid_count)\n\nuser_data_to_clean.shape\n\n# Create new column 'age_on_platform' which has the corresponding value in date type format\nuser_data_to_clean[\"age_on_platform\"] = user_data_to_clean[\"last_activity\"] - user_data_to_clean[\"first_login\"]\n\nuser_data_to_clean.info()",
"Validate if email i'd is correctly formatted and the email i'd really exists",
"from validate_email import validate_email\n\nemail_count_invalid = 0\nfor index, row in user_data_to_clean.iterrows(): \n if not validate_email(row.email): # , verify=True) for checking if email i'd actually exits\n user_data_to_clean.drop(index, inplace=True)\n email_count_invalid = email_count_invalid + 1\n \nprint \"Number of email-id invalid: %d\" % (email_count_invalid)\n\n\n# Check the result of last operation \nuser_data_to_clean.info()",
"Remove duplicates",
"user_data_to_deDuplicate = user_data_to_clean.copy()\n\nuser_data_deDuplicateD = user_data_to_deDuplicate.loc[~user_data_to_deDuplicate.email.str.strip().duplicated()]\nlen(user_data_deDuplicateD)\n\nuser_data_deDuplicateD.info()\n\n# Now its time to convert the timedelta64 data type column named age_on_platform to seconds\ndef convert_timedelta64_to_sec(td64):\n ts = (td64 / np.timedelta64(1, 's'))\n return ts\n\nuser_data_deDuplicateD_timedelta64_converted = user_data_deDuplicateD.copy()\ntemp_copy = user_data_deDuplicateD.copy()\nuser_data_deDuplicateD_timedelta64_converted.drop(\"age_on_platform\", 1)\nuser_data_deDuplicateD_timedelta64_converted['age_on_platform'] = temp_copy['age_on_platform'].apply(convert_timedelta64_to_sec)\n\n\nuser_data_deDuplicateD_timedelta64_converted.info()",
"Clustering using Mean shift\nfrom sklearn.cluster import MeanShift, estimate_bandwidth\nx = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]\nx = pd.Series(user_data_deDuplicateD_timedelta64_converted['age_on_platform'])\nX = np.array(zip(x,np.zeros(len(x))), dtype=np.int)\n'''--\nbandwidth = estimate_bandwidth(X, quantile=0.2)\nms = MeanShift(bandwidth=bandwidth, bin_seeding=True)\nms.fit(X)\nlabels = ms.labels_\ncluster_centers = ms.cluster_centers_\nlabels_unique = np.unique(labels)\nn_clusters_ = len(labels_unique)\nfor k in range(n_clusters_):\n my_members = labels == k\n print \"cluster {0} : lenght = {1}\".format(k, len(X[my_members, 0]))\n #print \"cluster {0}: {1}\".format(k, X[my_members, 0])\n cluster_sorted = sorted(X[my_members, 0])\n print \"cluster {0} : Max = {2} days & Min {1} days\".format(k, cluster_sorted[0]1.15741e-5, cluster_sorted[-1]1.15741e-5)\n'''\nThe following bandwidth can be automatically detected using\nbandwidth = estimate_bandwidth(X, quantile=0.7)\nms = MeanShift(bandwidth=bandwidth, bin_seeding=True)\nms.fit(X)\nlabels = ms.labels_\ncluster_centers = ms.cluster_centers_\nlabels_unique = np.unique(labels)\nn_clusters_ = len(labels_unique)\nprint(\"number of estimated clusters : %d\" % n_clusters_)\nfor k in range(n_clusters_):\n my_members = labels == k\n print \"cluster {0} : lenght = {1}\".format(k, len(X[my_members, 0]))\n cluster_sorted = sorted(X[my_members, 0])\n print \"cluster {0} : Min = {1} days & Max {2} days\".format(k, cluster_sorted[0]1.15741e-5, cluster_sorted[-1]1.15741e-5)\nPlot result\nimport matplotlib.pyplot as plt\nfrom itertools import cycle\n%matplotlib inline\nplt.figure(1)\nplt.clf()\ncolors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')\nfor k, col in zip(range(n_clusters_), colors):\n my_members = labels == k\n cluster_center = cluster_centers[k]\n plt.plot(X[my_members, 0], X[my_members, 1], col + '.')\n plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=14)\nplt.title('Estimated number of clusters: %d' % n_clusters_)\nplt.show()",
"# Clustering using Kmeans, not working\n'''\ny = [1,1,5,6,1,5,10,22,23,23,50,51,51,52,100,112,130,500,512,600,12000,12230]\ny_float = map(float, y)\nx = range(len(y))\nx_float = map(float, x)\n\nm = np.matrix([x_float, y_float]).transpose()\n\n\nfrom scipy.cluster.vq import kmeans\nkclust = kmeans(m, 5)\n\nkclust[0][:, 0]\n\nassigned_clusters = [abs(cluster_indices - e).argmin() for e in x]\n'''",
"Binning based on age_on_platform\nday 1; day 2; week 1; week 2; week 3; week 4; week 6; week 8; week 12; 3 months; 6 months; 1 year;",
"user_data_binned = user_data_deDuplicateD_timedelta64_converted.copy()\n \n# function to convert age_on_platform in seconds to hours\nconvert_sec_to_hr = lambda x: x/3600\nuser_data_binned[\"age_on_platform\"] = user_data_binned['age_on_platform'].map(convert_sec_to_hr).copy()\n\n# filter rows based on first_login value after 30th April\nuser_data_binned_post30thApril = user_data_binned[user_data_binned.first_login < datetime.datetime(2016, 4, 30)]\n\nfor index, row in user_data_binned_post30thApril.iterrows():\n if row[\"age_on_platform\"] < 25:\n user_data_binned_post30thApril.set_value(index, 'bin', 1)\n \n elif row[\"age_on_platform\"] >= 25 and row[\"age_on_platform\"] < 49:\n user_data_binned_post30thApril.set_value(index, 'bin', 2) \n \n elif row[\"age_on_platform\"] >= 49 and row[\"age_on_platform\"] < 169: #168 hrs = 1 week\n user_data_binned_post30thApril.set_value(index, 'bin', 3)\n \n elif row[\"age_on_platform\"] >=169 and row[\"age_on_platform\"] < 337: # 336 hrs = 2 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 4)\n \n elif row[\"age_on_platform\"] >=337 and row[\"age_on_platform\"] < 505: # 504 hrs = 3 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 5)\n \n elif row[\"age_on_platform\"] >=505 and row[\"age_on_platform\"] < 673: # 672 hrs = 4 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 6)\n \n elif row[\"age_on_platform\"] >=673 and row[\"age_on_platform\"] < 1009: # 1008 hrs = 6 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 7)\n \n elif row[\"age_on_platform\"] >=1009 and row[\"age_on_platform\"] < 1345: # 1344 hrs = 8 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 8)\n \n elif row[\"age_on_platform\"] >=1345 and row[\"age_on_platform\"] < 2017: # 2016 hrs = 12 weeks\n user_data_binned_post30thApril.set_value(index, 'bin', 9)\n \n elif row[\"age_on_platform\"] >=2017 and row[\"age_on_platform\"] < 4381: # 4380 hrs = 6 months\n user_data_binned_post30thApril.set_value(index, 'bin', 10)\n \n elif row[\"age_on_platform\"] >=4381 and row[\"age_on_platform\"] < 8761: # 8760 hrs = 12 months\n user_data_binned_post30thApril.set_value(index, 'bin', 11)\n \n elif row[\"age_on_platform\"] > 8761: # Rest, ie. 
beyond 1 year\n user_data_binned_post30thApril.set_value(index, 'bin', 12)\n \n else:\n user_data_binned_post30thApril.set_value(index, 'bin', 0)\n \n\nuser_data_binned_post30thApril.info()\n\nprint \"Number of users with age_on_platform equal to 1 day or less, aka 0th day = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 1])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 1].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_0day.csv\", index=False)\n\nprint \"Number of users with age_on_platform between 1st and 2nd days = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 2])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 2].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_1st-day.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 2 complete days and less than 1 week = %d\" % \\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 3])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 3].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_1st-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform between 2nd week = %d\" % \\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 4])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 4].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_2nd-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform between 3rd weeks = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 5])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 5].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_3rd-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform between 4th weeks = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 6])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 6].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_4th-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 4 weeks and less than 6 weeks = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 7])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 7].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_4th-to-6th-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 6 weeks and less than 8 weeks = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 8])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 8].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_6th-to-8th-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 8 weeks and less than 12 weeks = %d\" 
%\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 9])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 9].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_8th-to-12th-week.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 12 weeks and less than 6 months = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 10])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 10].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_12thweek-to-6thmonth.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than or equal to 6 months and less than 1 year = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 11])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 11].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_6thmonth-to-1year.csv\", index=False)\n\nprint \"Number of users with age_on_platform greater than 1 year = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 12])\nuser_data_binned_post30thApril[user_data_binned_post30thApril.bin == 12].to_csv\\\n(\"/home/eyebell/local_bin/janacare/janCC/datasets/user_retention_email-campaign/user_data_binned_post30thApril_beyond-1year.csv\", index=False)\n\nprint \"Number of users with age_on_platform is wierd = %d\" %\\\nlen(user_data_binned_post30thApril[user_data_binned_post30thApril.bin == 0])\n\n# Save dataframe with binned values as CSV\n#user_data_binned_post30thApril.to_csv('user_data_binned_post30thApril.csv')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/en-snapshot/addons/tutorials/optimizers_lazyadam.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Addons Optimizers: LazyAdam\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/addons/tutorials/optimizers_lazyadam\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_lazyadam.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_lazyadam.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nThis notebook will demonstrate how to use the lazy adam optimizer from the Addons package.\nLazyAdam\n\nLazyAdam is a variant of the Adam optimizer that handles sparse updates more efficiently.\n The original Adam algorithm maintains two moving-average accumulators for\n each trainable variable; the accumulators are updated at every step.\n This class provides lazier handling of gradient updates for sparse\n variables. It only updates moving-average accumulators for sparse variable\n indices that appear in the current batch, rather than updating the\n accumulators for all indices. Compared with the original Adam optimizer,\n it can provide large improvements in model training throughput for some\n applications. However, it provides slightly different semantics than the\n original Adam algorithm, and may lead to different empirical results.\n\nSetup",
"!pip install -U tensorflow-addons\n\nimport tensorflow as tf\nimport tensorflow_addons as tfa\n\n# Hyperparameters\nbatch_size=64\nepochs=10",
"Build the Model",
"model = tf.keras.Sequential([\n tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),\n tf.keras.layers.Dense(64, activation='relu', name='dense_2'),\n tf.keras.layers.Dense(10, activation='softmax', name='predictions'),\n])",
"Prepare the Data",
"# Load MNIST dataset as NumPy arrays\ndataset = {}\nnum_validation = 10000\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\n# Preprocess the data\nx_train = x_train.reshape(-1, 784).astype('float32') / 255\nx_test = x_test.reshape(-1, 784).astype('float32') / 255",
"Train and Evaluate\nSimply replace typical keras optimizers with the new tfa optimizer",
"# Compile the model\nmodel.compile(\n optimizer=tfa.optimizers.LazyAdam(0.001), # Utilize TFA optimizer\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=['accuracy'])\n\n# Train the network\nhistory = model.fit(\n x_train,\n y_train,\n batch_size=batch_size,\n epochs=epochs)\n\n\n# Evaluate the network\nprint('Evaluate on test data:')\nresults = model.evaluate(x_test, y_test, batch_size=128, verbose = 2)\nprint('Test loss = {0}, Test acc: {1}'.format(results[0], results[1]))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
peterwittek/qml-rg | Archiv_Session_Spring_2017/Exercises/05_APS Captcha.ipynb | gpl-3.0 | [
"import keras\nimport itertools as it\nimport matplotlib.pyplot as pl\nfrom tempfile import TemporaryDirectory\n\nTMPDIR = TemporaryDirectory()\nkeras.backend.set_image_data_format('channels_first')",
"Preprocessing",
"import os \nfrom skimage import io\nfrom skimage.color import rgb2gray\nfrom skimage import transform\nfrom math import ceil\n\n\nIMGSIZE = (100, 100)\n\ndef load_images(folder, scalefactor=(2, 2), labeldict=None):\n images = []\n labels = []\n files = os.listdir(folder)\n \n for file in (fname for fname in files if fname.endswith('.png')):\n \n img = io.imread(folder + file).astype(float)\n img = rgb2gray(img)\n # Crop since some of the real world pictures are other shape\n img = img[:IMGSIZE[0], :IMGSIZE[1]]\n # Possibly downscale to speed up processing\n img = transform.downscale_local_mean(img, scalefactor)\n # normalize image range\n img -= np.min(img)\n img /= np.max(img)\n images.append(img)\n \n if labeldict is not None:\n # lookup label for real world data in dict generated from labels.txt\n key, _ = os.path.splitext(file)\n labels.append(labeldict[key])\n else:\n # infere label from filename\n if file.find(\"einstein\") > -1 or file.find(\"curie\") > -1:\n labels.append(1)\n else:\n labels.append(0)\n \n return np.asarray(images)[:, None], np.asarray(labels)\n\nx_train, y_train = load_images('data/aps/train/')\n# Artifically pad Einstein's and Curie't to have balanced training set\n# ok, since we use data augmentation later anyway\nsel = y_train == 1\nrepeats = len(sel) // sum(sel) - 1\nx_train = np.concatenate((x_train[~sel], np.repeat(x_train[sel], repeats, axis=0)),\n axis=0)\ny_train = np.concatenate((y_train[~sel], np.repeat(y_train[sel], repeats, axis=0)),\n axis=0)\n\nx_test, y_test = load_images('data/aps/test/')\n\nrw_labels = {str(key): 0 if label == 0 else 1\n for key, label in np.loadtxt('data/aps/real_world/labels.txt', dtype=int)}\nx_rw, y_rw = load_images('data/aps/real_world/', labeldict=rw_labels)\n\nfrom mpl_toolkits.axes_grid import ImageGrid\nfrom math import ceil\n\ndef imsshow(images, grid=(5, -1)):\n assert any(g > 0 for g in grid)\n \n grid_x = grid[0] if grid[0] > 0 else ceil(len(images) / grid[1])\n grid_y = grid[1] if grid[1] > 0 else ceil(len(images) / grid[0])\n \n axes = ImageGrid(pl.gcf(), \"111\", (grid_y, grid_x), share_all=True)\n for ax, img in zip(axes, images):\n ax.get_xaxis().set_ticks([])\n ax.get_yaxis().set_ticks([])\n ax.imshow(img[0], cmap='gray')\n \npl.figure(0, figsize=(16, 10))\nimsshow(x_train, grid=(5, 1))\npl.show()\n\npl.figure(0, figsize=(16, 10))\nimsshow(x_train[::-4], grid=(5, 1))\npl.show()\n\nfrom keras.preprocessing.image import ImageDataGenerator\n\nimggen = ImageDataGenerator(rotation_range=20, \n width_shift_range=0.15,\n height_shift_range=0.15,\n shear_range=0.4,\n fill_mode='constant',\n cval=1.,\n zoom_range=0.3,\n channel_shift_range=0.1)\nimggen.fit(x_train)\n\nfor batch in it.islice(imggen.flow(x_train, batch_size=5), 2):\n pl.figure(0, figsize=(16, 5))\n imsshow(batch, grid=(5, 1))\n pl.show()",
"Training LeNet\nFirst, we will train a simple CNN with a single hidden fully connected layer as a classifier.",
"from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D\nfrom keras.models import Sequential\nfrom keras.backend import image_data_format\n\n\ndef generate(figsize, nr_classes, cunits=[20, 50], fcunits=[500]):\n model = Sequential()\n cunits = list(cunits)\n input_shape = figsize + (1,) if image_data_format == 'channels_last' \\\n else (1,) + figsize\n\n model.add(Conv2D(cunits[0], (5, 5), padding='same',\n activation='relu', input_shape=input_shape))\n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n\n # Convolutional layers\n for nr_units in cunits[1:]:\n model.add(Conv2D(nr_units, (5, 5), padding='same',\n activation='relu'))\n model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))\n\n # Fully connected layers\n model.add(Flatten())\n for nr_units in fcunits:\n model.add(Dense(nr_units, activation='relu'))\n\n # Output layer\n activation = 'softmax' if nr_classes > 1 else 'sigmoid'\n model.add(Dense(nr_classes, activation=activation))\n\n return model\n\nfrom keras.optimizers import Adam\nfrom keras.models import load_model\n\ntry:\n model = load_model('aps_lenet.h5')\n print(\"Model succesfully loaded...\")\nexcept OSError:\n print(\"Saved model not found, traing...\")\n model = generate(figsize=x_train.shape[-2:], nr_classes=1,\n cunits=[24, 48], fcunits=[100])\n optimizer = Adam()\n model.compile(loss='binary_crossentropy', optimizer=optimizer,\n metrics=['accuracy'])\n\n model.fit_generator(imggen.flow(x_train, y_train, batch_size=len(x_train)), \n validation_data=imggen.flow(x_test, y_test),\n steps_per_epoch=100, epochs=5,\n verbose=1, validation_steps=256)\n model.save('aps_lenet.h5')\n\nfrom sklearn.metrics import confusion_matrix\n\ndef plot_cm(cm, classes, normalize=False, \n title='Confusion matrix', cmap=pl.cm.viridis):\n \"\"\"\n This function prints and plots the confusion matrix.\n Normalization can be applied by setting `normalize=True`.\n \"\"\"\n if normalize:\n cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n \n pl.imshow(cm, interpolation='nearest', cmap=cmap)\n pl.title(title)\n pl.colorbar()\n tick_marks = np.arange(len(classes))\n pl.xticks(tick_marks, classes, rotation=45)\n pl.yticks(tick_marks, classes)\n\n thresh = cm.max() / 2.\n for i, j in it.product(range(cm.shape[0]), range(cm.shape[1])):\n pl.text(j, i, cm[i, j],\n horizontalalignment=\"center\",\n color=\"white\" if cm[i, j] > thresh else \"black\")\n\n pl.tight_layout()\n pl.ylabel('True label')\n pl.xlabel('Predicted label')\n\ny_pred_rw = model.predict_classes(x_rw, verbose=0).ravel()\nplot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,\n classes=[\"Not Einstein\", \"Einstein\"])",
"Training Random Forests\nPreprocessing to a fixed size training set since sklearn doesn't suppport streaming training sets?",
"# Same size training set as LeNet\nTRAININGSET_SIZE = len(x_train) * 5 * 100\n\nbatch_size = len(x_train)\nnr_batches = TRAININGSET_SIZE // batch_size + 1\nimgit = imggen.flow(x_train, y=y_train, batch_size=batch_size)\nx_train_sampled = np.empty((TRAININGSET_SIZE, 1,) + x_train.shape[-2:])\ny_train_sampled = np.empty(TRAININGSET_SIZE)\n\nfor batch, (x_batch, y_batch) in enumerate(it.islice(imgit, nr_batches)):\n buflen = len(x_train_sampled[batch * batch_size:(batch + 1) * batch_size])\n x_train_sampled[batch * batch_size:(batch + 1) * batch_size] = x_batch[:buflen]\n y_train_sampled[batch * batch_size:(batch + 1) * batch_size] = y_batch[:buflen]\n\nfrom sklearn.ensemble import RandomForestClassifier\n\nrfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,\n verbose=True)\nrfe = rfe.fit(x_train_sampled.reshape((TRAININGSET_SIZE, -1)), y_train_sampled)\n\ny_pred_rw = rfe.predict(x_rw.reshape((len(x_rw), -1)))\nplot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,\n classes=[\"Not Einstein\", \"Einstein\"])\npl.show()\n\nprint(\"Rightly classified Einsteins:\")\nimsshow(x_rw[((y_rw - y_pred_rw) == 0) * (y_rw == 1)])\npl.show()\n\nprint(\"Wrongly classified images:\")\nimsshow(x_rw[(y_rw - y_pred_rw) != 0])\npl.show()",
"So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier)",
"model = load_model('aps_lenet.h5')\nenc_layers = it.takewhile(lambda l: not isinstance(l, keras.layers.Flatten), \n model.layers)\nencoder_model = keras.models.Sequential(enc_layers)\nencoder_model.add(keras.layers.Flatten())\nx_train_sampled_enc = encoder_model.predict(x_train_sampled, verbose=True)\n\nrfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,\n verbose=True)\nrfe = rfe.fit(x_train_sampled_enc, y_train_sampled)\n\ny_pred_rw = rfe.predict(encoder_model.predict(x_rw, verbose=False))\nplot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,\n classes=[\"Not Einstein\", \"Einstein\"])\npl.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
robotcator/gensim | gensim Quick Start.ipynb | lgpl-2.1 | [
"# Getting Started with gensim\nThe goal of this tutorial is to get a new user up-and-running with gensim. This notebook covers the following objectives.\n## Objectives\n\nInstalling gensim.\nAccessing the gensim Jupyter notebook tutorials.\nPresenting the core concepts behind the library.\n\nInstalling gensim\nBefore we can start using gensim for natural language processing (NLP), you will need to install Python along with gensim and its dependences. It is suggested that a new user install a prepackaged python distribution and a number of popular distributions are listed below.\n\nAnaconda \nEPD \nWinPython \n\nOnce Python is installed, we will use pip to install the gensim library. First, we will make sure that Python is installed and accessible from the command line. From the command line, execute the following command:\nwhich python\n\nThe resulting address should correspond to the Python distribution that you installed above. Now that we have verified that we are using the correct version of Python, we can install gensim from the command line as follows:\npip install -U gensim\n\nTo verify that gensim was installed correctly, you can activate Python from the command line and execute import gensim\n$ python\nPython 3.5.1 |Anaconda custom (x86_64)| (default, Jun 15 2016, 16:14:02)\n[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import gensim\n>>> # No error is a good thing\n>>> exit()\n\nNote: Windows users that are following long should either use Windows subsystem for Linux or another bash implementation for Windows, such as Git bash\nAccessing the gensim Jupyter notebooks\nAll of the gensim tutorials (including this document) are stored in Jupyter notebooks. These notebooks allow the user to run the code locally while working through the material. If you would like to run a tutorial locally, first clone the GitHub repository for the project.\nbash\n $ git clone https://github.com/RaRe-Technologies/gensim.git \nNext, start a Jupyter notebook server. This is accomplished using the following bash commands (or starting the notebook server from the GUI application).\nbash\n $ cd gensim\n $ pwd\n /Users/user1/home/gensim\n $ cd docs/notebooks\n $ jupyter notebook \nAfter a few moments, Jupyter will open a web page in your browser and you can access each tutorial by clicking on the corresponding link. \n<img src=\"jupyter_home.png\">\nThis will open the corresponding notebook in a separate tab. The Python code in the notebook can be executed by selecting/clicking on a cell and pressing SHIFT + ENTER.\n<img src=\"jupyter_execute_cell.png\">\nNote: The order of cell execution matters. Be sure to run all of the code cells in order from top to bottom, you you might encounter errors.\nCore Concepts and Simple Example\nThis section introduces the basic concepts and terms needed to understand and use gensim and provides a simple usage example. In particular, we will build a model that measures the importance of a particular word.\nAt a very high-level, gensim is a tool for discovering the semantic structure of documents by examining the patterns of words (or higher-level structures such as entire sentences or documents). gensim accomplishes this by taking a corpus, a collection of text documents, and producing a vector representation of the text in the corpus. 
The vector representation can then be used to train a model, which is an algorithm that creates different, usually more semantic, representations of the data. These three concepts are key to understanding how gensim works, so let's take a moment to explain what each of them means. At the same time, we'll work through a simple example that illustrates each of them.\nCorpus\nA corpus is a collection of digital documents. This collection is the input to gensim from which it will infer the structure of the documents, their topics, etc. The latent structure inferred from the corpus can later be used to assign topics to new documents which were not present in the training corpus. For this reason, we also refer to this collection as the training corpus. No human intervention (such as tagging the documents by hand) is required - the topic classification is unsupervised.\nFor our corpus, we'll use a list of 9 strings, each consisting of only a single sentence.",
"raw_corpus = [\"Human machine interface for lab abc computer applications\",\n \"A survey of user opinion of computer system response time\",\n \"The EPS user interface management system\",\n \"System and human system engineering testing of EPS\", \n \"Relation of user perceived response time to error measurement\",\n \"The generation of random binary unordered trees\",\n \"The intersection graph of paths in trees\",\n \"Graph minors IV Widths of trees and well quasi ordering\",\n \"Graph minors A survey\"]",
"This is a particularly small example of a corpus for illustration purposes. Another example could be a list of all the plays written by Shakespeare, list of all wikipedia articles, or all tweets by a particular person of interest.\nAfter collecting our corpus, there are typically a number of preprocessing steps we want to undertake. We'll keep it simple and just remove some commonly used English words (such as 'the') and words that occur only once in the corpus. In the process of doing so, we'll tokenize our data. Tokenization breaks up the documents into words (in this case using space as a delimiter).",
"# Create a set of frequent words\nstoplist = set('for a of the and to in'.split(' '))\n# Lowercase each document, split it by white space and filter out stopwords\ntexts = [[word for word in document.lower().split() if word not in stoplist]\n for document in raw_corpus]\n\n# Count word frequencies\nfrom collections import defaultdict\nfrequency = defaultdict(int)\nfor text in texts:\n for token in text:\n frequency[token] += 1\n\n# Only keep words that appear more than once\nprocessed_corpus = [[token for token in text if frequency[token] > 1] for text in texts]\nprocessed_corpus",
"Before proceeding, we want to associate each word in the corpus with a unique integer ID. We can do this using the gensim.corpora.Dictionary class. This dictionary defines the vocabulary of all words that our processing knows about.",
"from gensim import corpora\n\ndictionary = corpora.Dictionary(processed_corpus)\nprint(dictionary)",
"Because our corpus is small, there is only 12 different tokens in this Dictionary. For larger corpuses, dictionaries that contains hundreds of thousands of tokens are quite common.\nVector\nTo infer the latent structure in our corpus we need a way to represent documents that we can manipulate mathematically. One approach is to represent each document as a vector. There are various approaches for creating a vector representation of a document but a simple example is the bag-of-words model. Under the bag-of-words model each document is represented by a vector containing the frequency counts of each word in the dictionary. For example, given a dictionary containing the words ['coffee', 'milk', 'sugar', 'spoon'] a document consisting of the string \"coffee milk coffee\" could be represented by the vector [2, 1, 0, 0] where the entries of the vector are (in order) the occurrences of \"coffee\", \"milk\", \"sugar\" and \"spoon\" in the document. The length of the vector is the number of entries in the dictionary. One of the main properties of the bag-of-words model is that it completely ignores the order of the tokens in the document that is encoded, which is where the name bag-of-words comes from.\nOur processed corpus has 12 unique words in it, which means that each document will be represented by a 12-dimensional vector under the bag-of-words model. We can use the dictionary to turn tokenized documents into these 12-dimensional vectors. We can see what these IDs correspond to:",
"print(dictionary.token2id)",
"For example, suppose we wanted to vectorize the phrase \"Human computer interaction\" (note that this phrase was not in our original corpus). We can create the bag-of-word representation for a document using the doc2bow method of the dictionary, which returns a sparse representation of the word counts:",
"new_doc = \"Human computer interaction\"\nnew_vec = dictionary.doc2bow(new_doc.lower().split())\nnew_vec",
"The first entry in each tuple corresponds to the ID of the token in the dictionary, the second corresponds to the count of this token.\nNote that \"interaction\" did not occur in the original corpus and so it was not included in the vectorization. Also note that this vector only contains entries for words that actually appeared in the document. Because any given document will only contain a few words out of the many words in the dictionary, words that do not appear in the vectorization are represented as implicitly zero as a space saving measure.\nWe can convert our entire original corpus to a list of vectors:",
"bow_corpus = [dictionary.doc2bow(text) for text in processed_corpus]\nbow_corpus",
"Note that while this list lives entirely in memory, in most applications you will want a more scalable solution. Luckily, gensim allows you to use any iterator that returns a single document vector at a time. See the documentation for more details.\nModel\nNow that we have vectorized our corpus we can begin to transform it using models. We use model as an abstract term referring to a transformation from one document representation to another. In gensim, documents are represented as vectors so a model can be thought of as a transformation between two vector spaces. The details of this transformation are learned from the training corpus.\nOne simple example of a model is tf-idf. The tf-idf model transforms vectors from the bag-of-words representation to a vector space, where the frequency counts are weighted according to the relative rarity of each word in the corpus.\nHere's a simple example. Let's initialize the tf-idf model, training it on our corpus and transforming the string \"system minors\":",
"from gensim import models\n# train the model\ntfidf = models.TfidfModel(bow_corpus)\n# transform the \"system minors\" string\ntfidf[dictionary.doc2bow(\"system minors\".lower().split())]",
"The tfidf model again returns a list of tuples, where the first entry is the token ID and the second entry is the tf-idf weighting. Note that the ID corresponding to \"system\" (which occurred 4 times in the original corpus) has been weighted lower than the ID corresponding to \"minors\" (which only occurred twice).\ngensim offers a number of different models/transformations. See Transformations and Topics for details.\nNext Steps\nInterested in learning more about gensim? Please read through the following notebooks.\n\nCorpora_and_Vector_Spaces.ipynb\nword2vec.ipynb"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
guilgautier/DPPy | notebooks/fast_sampling_of_beta_ensembles.ipynb | mit | [
"Companion notebook of the paper Fast sampling of $\\beta$-ensembles\nby Guillaume Gautier, Rémi Bardenet, and Michal Valko\nSee also the arXiv preprint: 2003.02344 \n<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Companion-notebook-of-the-paper-Fast-sampling-of-$\\beta$-ensembles\" data-toc-modified-id=\"Companion-notebook-of-the-paper-Fast-sampling-of-$\\beta$-ensembles-1\"><span class=\"toc-item-num\">1 </span>Companion notebook of the paper <em>Fast sampling of $\\beta$-ensembles</em></a></span><ul class=\"toc-item\"><li><span><a href=\"#Summary\" data-toc-modified-id=\"Summary-1.1\"><span class=\"toc-item-num\">1.1 </span>Summary</a></span></li></ul></li><li><span><a href=\"#Imports\" data-toc-modified-id=\"Imports-2\"><span class=\"toc-item-num\">2 </span>Imports</a></span><ul class=\"toc-item\"><li><span><a href=\"#$V(x)-=-g_{2m}-x^{2m}$\" data-toc-modified-id=\"$V(x)-=-g_{2m}-x^{2m}$-2.1\"><span class=\"toc-item-num\">2.1 </span>$V(x) = g_{2m} x^{2m}$</a></span><ul class=\"toc-item\"><li><span><a href=\"#$V(x)-=-\\frac{1}{2}-x^2$-(Hermite-ensemble)\" data-toc-modified-id=\"$V(x)-=-\\frac{1}{2}-x^2$-(Hermite-ensemble)-2.1.1\"><span class=\"toc-item-num\">2.1.1 </span>$V(x) = \\frac{1}{2} x^2$ (Hermite ensemble)</a></span></li><li><span><a href=\"#$V(x)-=-\\frac{1}{4}-x^4$\" data-toc-modified-id=\"$V(x)-=-\\frac{1}{4}-x^4$-2.1.2\"><span class=\"toc-item-num\">2.1.2 </span>$V(x) = \\frac{1}{4} x^4$</a></span></li><li><span><a href=\"#$V(x)-=-\\frac{1}{6}-x^6$\" data-toc-modified-id=\"$V(x)-=-\\frac{1}{6}-x^6$-2.1.3\"><span class=\"toc-item-num\">2.1.3 </span>$V(x) = \\frac{1}{6} x^6$</a></span></li></ul></li><li><span><a href=\"#$V(x)-=-g_2-x^2-+-g_4-x^4$\" data-toc-modified-id=\"$V(x)-=-g_2-x^2-+-g_4-x^4$-2.2\"><span class=\"toc-item-num\">2.2 </span>$V(x) = g_2 x^2 + g_4 x^4$</a></span><ul class=\"toc-item\"><li><span><a href=\"#$V(x)=-\\frac{1}{4}-x^4-+-\\frac{1}{2}-x^2$\" data-toc-modified-id=\"$V(x)=-\\frac{1}{4}-x^4-+-\\frac{1}{2}-x^2$-2.2.1\"><span class=\"toc-item-num\">2.2.1 </span>$V(x)= \\frac{1}{4} x^4 + \\frac{1}{2} x^2$</a></span></li><li><span><a href=\"#$V(x)=-\\frac{1}{4}-x^4---x^2$-(onset-of-two-cut-solution)\" data-toc-modified-id=\"$V(x)=-\\frac{1}{4}-x^4---x^2$-(onset-of-two-cut-solution)-2.2.2\"><span class=\"toc-item-num\">2.2.2 </span>$V(x)= \\frac{1}{4} x^4 - x^2$ (onset of two-cut solution)</a></span></li><li><span><a href=\"#$V(x)=-\\frac{1}{4}-x^4---\\frac{5}{4}-x^2$-(Two-cut-eigenvalue-distribution)\" data-toc-modified-id=\"$V(x)=-\\frac{1}{4}-x^4---\\frac{5}{4}-x^2$-(Two-cut-eigenvalue-distribution)-2.2.3\"><span class=\"toc-item-num\">2.2.3 </span>$V(x)= \\frac{1}{4} x^4 - \\frac{5}{4} x^2$ (Two-cut eigenvalue distribution)</a></span></li></ul></li><li><span><a href=\"#$V(x)-=-\\frac{1}{20}-x^4---\\frac{4}{15}x^3-+-\\frac{1}{5}x^2-+-\\frac{8}{5}x$\" data-toc-modified-id=\"$V(x)-=-\\frac{1}{20}-x^4---\\frac{4}{15}x^3-+-\\frac{1}{5}x^2-+-\\frac{8}{5}x$-2.3\"><span class=\"toc-item-num\">2.3 </span>$V(x) = \\frac{1}{20} x^4 - \\frac{4}{15}x^3 + \\frac{1}{5}x^2 + \\frac{8}{5}x$</a></span></li></ul></li></ul></div>\n\nSummary\nWe focus on sampling $\\beta$-ensembles with $N$ points associated to polynomial potentials $V$ with even degree which take the form\n$$\n\\begin{equation}\n\\label{eq:potential_V}\nV(x) = \\frac{g_6}{6} x^6 \n + \\frac{g_4}{4} x^4 \n + \\frac{g_3}{3} x^3 \n + \\frac{g_2}{2} x^2\n + g_1 x\n\\end{equation}\n$$\nWe derive a fast but approximate procedure to 
generate samples from the targeted $\\beta$-ensemble. In other words, we wish to sample from $(x_{1},\\dots, x_N)$ with joint distribution proportional to \n$$\n\\begin{equation}\n\\label{eq:joint_x}\n\\left|\\Delta(x_1,\\dots,x_N)\\right|^{\\beta}\n ~ \\exp^{-\\sum_{i=1}^N V(x_i)}\n \\prod_{i=1}^N d x_i\n\\end{equation}\n$$\nwhere $\\Delta(x_1,\\dots,x_N) = \\prod_{1\\leq i < j \\leq N} (x_j-x_i)$.\nTo do this we view $x_1, \\dots, x_N$ as the eigenvalues of a random Jacobi matrix, i.e., a real-symmetric, tridiagonal matrix with positive subdiagonal\n$$\n\\begin{equation}\n\\label{eq:jacobi_matrix_J_N_a_b}\nJ_{ab}=\n\\begin{bmatrix}\n a_1 & \\sqrt{b_1}& 0 & 0 \\\n \\sqrt{b_1} & a_2 & \\ddots & 0 \\\n 0 & \\ddots & \\ddots & \\sqrt{b_{N-1}} \\\n 0 & 0 & \\sqrt{b_{N-1}} & a_{N} \n\\end{bmatrix}\n\\end{equation}.\n$$\n\nTo draw the correspondence with Jacobi matrices, we first augment the distribution of the points $x_1,\\dots,x_N$ with auxilary weights $w_{1}, \\dots, w_{N}$, distributed independently from the points as a Dirichlet $\\operatorname{Dir}\\left(\\frac{\\beta}{2}\\right)$, so that\n$$\n\\begin{equation}\n\\label{eq:joint_x_w}\n (x_{1:N}, w_{1:N-1})\n \\sim\n \\left|\\Delta(x_1,\\dots,x_N)\\right|^{\\beta}\n ~ \\exp^{- \\sum_{i=1}^N V(x_i)}\n \\prod_{i=1}^N d x_i\n \\prod_{i=1}^{N-1} w_i^{\\frac{\\beta}{2}-1} d w_i\n\\end{equation}\n$$\nThis allows to consider the random measure $\\mu = \\sum_{n=1}^N w_n \\delta_{x_n}$ supported on the targeted $\\beta$-ensemble.\nTaking for \nThe corresponding Jacobi matrix $J_{ab}$ characterizes the three-term recurrence relation between orthonormal polynomials w.r.t. $\\mu$ and Favard's theorem give that the mapping $\\mu \\mapsto J_{ab}$ is a diffeomorphism.\nWe can thus convert the distribution \\eqref{eq:joint_x_w} of the nodes and weights of $\\mu$ to the distribution of the entries of $J_{ab}$, namely\n$$\n\\begin{equation}\n\\label{eq:joint_a_b}\n (a_{1:N}, b_{1:N-1})\n \\sim\n \\prod_{i=1}^{N-1}\n b_{i}^{\\frac{\\beta}{2}(N-i)-1}\n \\exp^{-\\operatorname{Tr}[V(J_N(a, b)]}\n d a_{1:N}, b_{1:N-1}\n\\end{equation}\n$$\nsee, e.g., [KrRiVi13, Proposition 2].\n\nHence, computing the eigenvalues of the Jacobi matrix $J_{ab}$ \\eqref{eq:jacobi_matrix_J_N_a_b} with entries sampled from \\eqref{eq:joint_a_b} provides a way to sample the corresponding $\\beta$-ensemble \\eqref{eq:joint_x} with $\\mathcal{O}(N^2)$ complexity.\nThe question remains as to sample efficiently from \\eqref{eq:joint_a_b}.\nFor specific potentials, namely $V(x)=x^2, x - \\log(x), - \\log(1-x) + \\log(1+x)$ that are respectively associated to the Hermite, Laguerre and Jacobi ensembles, the stars align perfectly to make the entries of $J_{ab}$ independent with easy-to-sample distributions: Gaussian, Gamma, Beta.\nBut for more general potentials $V$, the Jacobi coefficients are no longer indenpendent and sampling from the distribution \\eqref{eq:joint_a_b} remains a challenge.\nYet, when considering polynomials potentials, the interaction between the different parameters $a_n, b_n$ has short range, driven by the degree of $V$.\nWe exploit this short range of interaction using a Gibbs sampler on Jacobi matrices to generate fast but approximate samples from the targeted $\\beta$ ensemble.\nAfter rescaling the polynomial potentials \\eqref{eq:potential_V} as $V\\leftarrow\\frac{\\beta N}{2} V$,\nfor the chain of Jacobi matrices, we compare the empirical distribution of\n- the eigenvalues to the expected equilibrium distribution\n- the largest eigenvalue to the Tracy-Widom 
distribution, as expected for some potentials $V$ when $\\beta=2$.\nOur empirical study seems to confirm the fast $\\log(N)$ mixing time suggested by [KrRiVi13, p.6].\nNote:\nIf you are interested in sampling exactly from the classical (Hermite, Laguerre and Jacobi) $\\beta$-ensembles, please refer to the corresponding section in the tutorial notebook of the DPPy toolbox.\nImports\nHere are the detailed INSTALLATION INSTRUCTIONS of DPPy\nTo install the latest version available on PyPI you can use",
"# !pip install dppy",
"💣 Note: The version available on PyPI might not be the latest version of DPPy.\nPlease consider forking or cloning DPPy using",
"# !rm -r DPPy\n# !git clone https://github.com/guilgautier/DPPy.git\n# !pip install scipy --upgrade\n\n# Then\n# !pip install DPPy/. \n# OR\n# !pip install DPPy/.['zonotope','trees','docs'] to perform a full installation.",
"💣 If you have chosen to clone the repo and now wish to interact with the source code while running this notebook.\nYou can uncomment the following cell.",
"%load_ext autoreload\n%autoreload 2\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n%config InlineBackend.figure_format = 'retina'\n\nfrom dppy.beta_ensemble_polynomial_potential import BetaEnsemblePolynomialPotential",
"💻You can play with the various parameters, e.g., $N, \\beta, V$ nb_gibbs_passes 💻\n$V(x) = g_{2m} x^{2m}$\nWe first consider even monomial potentials whose equilibrium distribution can be derived from [Dei00, Proposition 6.156]\n$V(x) = \\frac{1}{2} x^2$ (Hermite ensemble)\nThis this the potential associated to the Hermite ensemble.\nIn this case, the Jacobi parameters are all independent and sampling is exact [DuEd02, II C].\nIn our setting, this corresponds to a single pass of the Gibbs sampler over each variable.",
"beta, V = 2, np.poly1d([0.5, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x2 = be.sample_mcmc(N=1000, nb_gibbs_passes=1,\n sample_exact_cond=True)\n\nbe.hist(sampl_x2)",
"$V(x) = \\frac{1}{4} x^4$\nTo depart from the classical quadratic potential we consider the quartic potential, which has been sampled by\n[LiMe13]\n[OlNaTr15]\n[ChFe19, Section 3.1]",
"beta, V = 2, np.poly1d([1/4, 0, 0, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x4 = be.sample_mcmc(N=200, nb_gibbs_passes=10,\n sample_exact_cond=True)\n# sample_exact_cond=False,\n# nb_mala_steps=100)\n\nbe.hist(sampl_x4)",
"$V(x) = \\frac{1}{6} x^6$\nThis is the first time the sextic ensemble is (approximately) sampled to the best of our knowledge.\nIn this case, the conditionals associated to the $a_n$ parameters are not $\\log$-concave and we do not support exact sampling but perform a few steps (100 by defaults) of MALA.\nFor this reason, we set sample_exact_cond=False.",
"beta, V = 2, np.poly1d([1/6, 0, 0, 0, 0, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x6 = be.sample_mcmc(N=200, nb_gibbs_passes=10,\n sample_exact_cond=False,\n nb_mala_steps=100)\n\nbe.hist(sampl_x6)",
"$V(x) = g_2 x^2 + g_4 x^4$\nWe consider quartic potentials where $g_2$ varies to reveal equilibrium distributions with support which are connected, about to be disconnect and fully disconnected.\nWe refer to\n[DuKu06, p.2-3],\n[Molinari, Example 3.3] and\n[LiMe13, Section 2]\nfor the exact shape of the corresponding equilibrium densities.\n$V(x)= \\frac{1}{4} x^4 + \\frac{1}{2} x^2$\nThis case reveals an equilibrium density with a connected support.",
"beta, V = 2, np.poly1d([1/4, 0, 1/2, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x4_x2 = be.sample_mcmc(N=1000, nb_gibbs_passes=10,\n sample_exact_cond=True)\n# sample_exact_cond=False,\n# nb_mala_steps=100)\n\nbe.hist(sampl_x4_x2)",
"$V(x)= \\frac{1}{4} x^4 - x^2$ (onset of two-cut solution)\nThis case reveal an equilibrium density with support which is about to be disconnected.\nThe conditionals associated to the $a_n$ parameters are not $\\log$-concave and we do not support exact sampling but perform a few steps (100 by defaults) of MALA.\nFor this reason, we set sample_exact_cond=False.",
"beta, V = 2, np.poly1d([1/4, 0, -1, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x4_x2_onset_2cut = be.sample_mcmc(N=1000, nb_gibbs_passes=10, \n sample_exact_cond=False,\n nb_mala_steps=100)\n\nbe.hist(sampl_x4_x2_onset_2cut)",
"$V(x)= \\frac{1}{4} x^4 - \\frac{5}{4} x^2$ (Two-cut eigenvalue distribution)\nThis case reveals an equilibrium density with support having two connected components.\nThe conditionals associated to the $a_n$ parameters are not $\\log$-concave and we do not support exact sampling but perform a few steps (100 by defaults) of MALA.\nFor this reason, we set sample_exact_cond=False.",
"beta, V = 2, np.poly1d([1/4, 0, -1.25, 0, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x4_x2_2cut = be.sample_mcmc(N=200, nb_gibbs_passes=10,\n sample_exact_cond=False,\n nb_mala_steps=100)\n\nbe.hist(sampl_x4_x2_2cut)",
"$V(x) = \\frac{1}{20} x^4 - \\frac{4}{15}x^3 + \\frac{1}{5}x^2 + \\frac{8}{5}x$\nThis case reveals a singular behavior at the right edge of the support of the equilibrium density\nThe conditionals associated to the $a_n$ parameters are not $\\log$-concave and we do not support exact sampling but perform a few steps (100 by defaults) of MALA.\nFor this reason, we set sample_exact_cond=False.\nWe refer to [ClItsKr10, Example 1.2]\n[OlNaTr14, Section 3.2] for the expression of the corresponding equilibrium density.",
"beta, V = 2, np.poly1d([1/20, -4/15, 1/5, 8/5, 0])\nbe = BetaEnsemblePolynomialPotential(beta, V)\n\nsampl_x4_x3_x2_x = be.sample_mcmc(N=200, nb_gibbs_passes=10,\n sample_exact_cond=False,\n nb_mala_steps=100)\n\nbe.hist(sampl_x4_x3_x2_x)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daphnei/nn_chatbot | homeworks/XOR/HW1_report.ipynb | mit | [
"Homework 2\nDaphne Ippolito",
"import xor_network",
"What issues did you have?\nThe first issue that I has was that I was trying to output a single scalar whose value could be thresholded to determine whether the network should return TRUE or FALSE. It turns out loss functions for this are much more complicated than if I had instead treated the XOR problem as a classification task with one output per possible label ('TRUE', 'FALSE'). This is the approach I have implemented here.\nAnother issue I encountered at first was that I was using too few hidden nodes. I originally thought that such a simple problem would only need a couple nodes in a single hidden layer to implement. However, such small networks were extremely slow to converge. This is exemplified in the Architectures section.\nLastly, when I was using small batch sizes (<= 5 examples), and randomly populating the batches, the network would sometimes fail to converge, probably because the batches didn't contain all the possible examples. \nWhich activation functions did you try? Which loss functions?\nI tried ReLU, sigmoid, and tanh activation functions. I only successfully uses a softmax cross-entropy loss function.\nThe results for the different activation functions can be seen by running the block below. The sigmoid function consistently takes the longest to converge. I'm unsure why tanh does significantly better than sigmoid.",
"batch_size = 100\nnum_steps = 10000\nnum_hidden = 7\nnum_hidden_layers = 2\nlearning_rate = 0.2\n\nxor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'sigmoid')\n\nxor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'tanh')\n\nxor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'relu')",
"What architectures did you try? What were the different results? How long did it take?\nThe results for several different architectures can be seen by running the code below. Since there is no reading from disk, each iteration takes almost exactly the same amount of time. Therefore, I will report \"how long it takes\" in number of iterations rather than in time.",
"# Network with 2 hidden layers of 5 nodes\nxor_network.run_network(batch_size, num_steps, 5, 2, learning_rate, False, 'relu')\n\n# Network with 5 hidden layers of 2 nodes each\nnum_steps = 3000 # (so it doesn't go on forever)\nxor_network.run_network(batch_size, num_steps, 2, 5, learning_rate, False, 'relu')",
"Conclusion from the above: With the number of parameters held constant, a deeper network does not necessarily perform better than a shallower one. I am guessing this is because fewer nodes in a layer means that the network can keep around less information from layer to layer.",
"xor_network.run_network(batch_size, num_steps, 3, 5, learning_rate, False, 'relu')",
"Conclusion from the above: Indeed, the problem is not the number of layers, but the number of nodes in each layer.",
"# This is the minimum number of nodes I can use to consistently get convergence with Gradient Descent.\nxor_network.run_network(batch_size, num_steps, 5, 1, learning_rate, False, 'relu')\n\n# If I switch to using Adam Optimizer, I can get down to 2 hidden nodes and consistently have convergence.\nxor_network.run_network(batch_size, num_steps, 2, 1, learning_rate, True, 'relu')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
astro4dev/OAD-Data-Science-Toolkit | Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb | gpl-3.0 | [
"Números y jerarquía de operaciones\n\nNúmeros enteros y flotantes\nJerarquía de operaciones\nAsignación de variables\n\nNúmeros enteros y flotantes\nCon los números se pueden realizar los siguientes tipos de operaciones:\n| Operación | Resultado |\n| --------- | --------------- |\n| + | Suma |\n| - | Resta |\n| * | Multiplicación |\n| / | División |\n| // | División entera |\n| ** | Potencia |\n| % | Residuo (modulo) |\nAlgunos sencillos ejemplos son:",
"2.**5\n\n2**5\n\n3/2\n\n3//2\n\n2//3\n\n21/3\n\n21//3\n\n21%3",
"¿Cuál es el resultado de cada una de las siguientes operaciones?\n\n18/4\n18//4\n18%4",
"18%4",
"Jerarquía de operaciones\n\nParéntesis\nExponenciación\nMultiplicación y División\nSumas y Restas (izquierda a derecha)",
"2 * (3-1) \n\n(1+1)**(5-2)\n\n2**1+1\n\n3*1**3\n\n2*3-1\n\n5-2*2\n\n6-3+2\n\n6-(3+2)\n\n100/100/2\n\n100/100*2\n\n100/(100*2)",
"¿Cuál es el valor de la siguiente expresión?\n16 - 2 * 5 // 3 + 1 \n\n(a) 14\n(b) 24\n(c) 3\n(d) 13.667 \n\nAsignación de variables",
"x = 15\ny = x\nx == y\n\nx = 22\nx==y\n\nx = x+1\nx\n\nx+=1\nx\n\nx-=20\nx",
"¿Qué aparece cuando se ejecuta la siguiente secuencia?\nx = 12\nx = x - 1\nprint(x)\n- (a) 12\n- (b) -1\n- (c) 11\n- (d) Aparece un error porque x no puede ser igual a x - 1.\n¿Qué aparece cuando se ejecuta la siguiente secuencia?\nx = 12\nx = x - 3\nx = x + 5\nx = x + 1\nprint(x)\n- (a) 12\n- (b) 9\n- (c) 15\n- (d) Aparece un error porque no es posible cambiar el valor de x tantas veces.\n¿En qué orden hay que poner estas instrucciones para que se muestre el número 2000?\n\n1) miplata=1500\n2) print(miplata)\n\n3) miplata+=500\n\n\n(a) 123\n\n(b) 321\n(c) 231\n(d) 132\n\n<img src=\"img/rock.png\">\nEl material de este notebook fue recopilado para Clubes de Ciencia Colombia 2017 por Luis Henry Quiroga (GitHub: lhquirogan) - Germán Chaparro (GitHub: saint-germain), y fue adaptado de https://github.com/PythonBootcampUniandes"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maxalbert/tohu | notebooks/v4/Primitive_generators.ipynb | mit | [
"Primitive generators\nThis notebook contains tests for tohu's primitive generators.",
"import tohu\nfrom tohu.v4.primitive_generators import *\nfrom tohu.v4.dispatch_generators import *\nfrom tohu.v4.utils import print_generated_sequence\n\nprint(f'Tohu version: {tohu.__version__}')",
"Constant\nConstant simply returns the same, constant value every time.",
"g = Constant('quux')\n\nprint_generated_sequence(g, num=10, seed=12345)",
"Boolean\nBoolean returns either True or False, optionally with different probabilities.",
"g1 = Boolean()\ng2 = Boolean(p=0.8)\n\nprint_generated_sequence(g1, num=20, seed=12345)\nprint_generated_sequence(g2, num=20, seed=99999)",
"Integer\nInteger returns a random integer between low and high (both inclusive).",
"g = Integer(low=100, high=200)\n\nprint_generated_sequence(g, num=10, seed=12345)",
"Float\nFloat returns a random float between low and high (both inclusive).",
"g = Float(low=2.3, high=4.2)\n\nprint_generated_sequence(g, num=10, sep='\\n', fmt='.12f', seed=12345)",
"HashDigest\nHashDigest returns hex strings representing hash digest values (or alternatively raw bytes).\nHashDigest hex strings (uppercase)",
"g = HashDigest(length=6)\n\nprint_generated_sequence(g, num=10, seed=12345)",
"HashDigest hex strings (lowercase)",
"g = HashDigest(length=6, uppercase=False)\n\nprint_generated_sequence(g, num=10, seed=12345)",
"HashDigest byte strings",
"g = HashDigest(length=10, as_bytes=True)\n\nprint_generated_sequence(g, num=5, seed=12345, sep='\\n')",
"NumpyRandomGenerator\nThis generator can produce random numbers using any of the random number generators supported by numpy.",
"g1 = NumpyRandomGenerator(method=\"normal\", loc=3.0, scale=5.0)\ng2 = NumpyRandomGenerator(method=\"poisson\", lam=30)\ng3 = NumpyRandomGenerator(method=\"exponential\", scale=0.3)\n\ng1.reset(seed=12345); print_generated_sequence(g1, num=4)\ng2.reset(seed=12345); print_generated_sequence(g2, num=15)\ng3.reset(seed=12345); print_generated_sequence(g3, num=4)",
"FakerGenerator\nFakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples.\nExample: random names",
"g = FakerGenerator(method='name')\n\nprint_generated_sequence(g, num=8, seed=12345)",
"Example: random addresses",
"g = FakerGenerator(method='address')\n\nprint_generated_sequence(g, num=8, seed=12345, sep='\\n---\\n')",
"IterateOver\nIterateOver is a generator which simply iterates over a given sequence. Note that once the generator has been exhausted (by iterating over all its elements), it needs to be reset before it can produce elements again.",
"seq = ['a', 'b', 'c', 'd', 'e']\n\ng = IterateOver(seq)\n\ng.reset()\nprint([x for x in g])\nprint([x for x in g])\ng.reset()\nprint([x for x in g])",
"SelectOne",
"some_items = ['aa', 'bb', 'cc', 'dd', 'ee']\n\ng = SelectOne(some_items)\n\nprint_generated_sequence(g, num=30, seed=12345)",
"By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.",
"g = SelectOne(some_items, p=[0.1, 0.05, 0.7, 0.03, 0.12])\n\nprint_generated_sequence(g, num=30, seed=99999)",
"We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively).\nTimestamp\nTimestamp produces random timestamps between a start and end time (both inclusive).",
"g = Timestamp(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')\n\nprint_generated_sequence(g, num=10, sep='\\n', seed=99999)",
"If start or end are dates of the form YYYY-MM-DD (without the exact HH:MM:SS timestamp), they are interpreted as start='YYYY-MM-DD 00:00:00 and end='YYYY-MM-DD 23:59:59', respectively - i.e., as the beginning and the end of the day.",
"g = Timestamp(start='2018-02-14', end='2018-02-18')\n\nprint_generated_sequence(g, num=5, sep='\\n', seed=12345)",
"For convenience, one can also pass a single date, which will produce timestamps during this particular date.",
"g = Timestamp(date='2018-01-01')\n\nprint_generated_sequence(g, num=5, sep='\\n', seed=12345)",
"Note that the generated items are datetime objects (even though they appear as strings when printed above).",
"g.reset(seed=12345)\n[next(g), next(g), next(g)]",
"We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.",
"h = Timestamp(date='2018-01-01').strftime('%-d %b %Y, %H:%M (%a)')\n\nh.reset(seed=12345)\n[next(h), next(h), next(h)]",
"CharString",
"g = CharString(length=15)\nprint_generated_sequence(g, num=5, seed=12345)\nprint_generated_sequence(g, num=5, seed=99999)",
"It is possible to explicitly specify the character set.",
"g = CharString(length=12, charset=\"ABCDEFG\")\nprint_generated_sequence(g, num=5, sep='\\n', seed=12345)",
"There are also a few pre-defined character sets.",
"g1 = CharString(length=12, charset=\"<lowercase>\")\ng2 = CharString(length=12, charset=\"<alphanumeric_uppercase>\")\nprint_generated_sequence(g1, num=5, sep='\\n', seed=12345); print()\nprint_generated_sequence(g2, num=5, sep='\\n', seed=12345)",
"DigitString\nDigitString is the same as CharString with charset='0123456789'.",
"g = DigitString(length=15)\nprint_generated_sequence(g, num=5, seed=12345)\nprint_generated_sequence(g, num=5, seed=99999)",
"Sequential\nGenerates a sequence of sequentially numbered strings with a given prefix.",
"g = Sequential(prefix='Foo_', digits=3)",
"Calling reset() on the generator makes the numbering start from 1 again.",
"g.reset()\nprint_generated_sequence(g, num=5)\nprint_generated_sequence(g, num=5)\nprint()\ng.reset()\nprint_generated_sequence(g, num=5)",
"Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:",
"g.reset(seed=12345); print_generated_sequence(g, num=5)\ng.reset(seed=99999); print_generated_sequence(g, num=5)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fsilva/deputado-histogramado | notebooks/Deputado-Histogramado-3.ipynb | gpl-3.0 | [
"Deputado Histogramado\nexpressao.xyz/deputado/\nComo processar as sessões do parlamento Português\nÍndice\n\nReunír o dataset\nContando as palavras mais comuns\nFazendo histogramas\nRepresentações geograficas\nSimplificar o dataset e exportar para o expressa.xyz/deputado/\n\nO que se passou nas mais de 4000 sessões de discussão do parlamento Português que ocorreram desde 1976? \nNeste notebook vamos tentar visualizar o que se passou da maneira mais simples - contando palavras, e fazendo gráficos.\nPara obter os textos de todas as sessões usaremos o demo.cratica.org, onde podemos aceder facilmente a todas as sessões do parlamento de 1976 a 2015. Depois com um pouco de python, pandas e matplotlib vamos analisar o que se passou.\nPara executar estes notebook será necessário descarregar e abrir com o Jupiter Notebooks (a distribuição Anaconda faz com que instalar todas as ferramentas necessárias seja fácil - https://www.continuum.io/downloads)\nParte 3 - Fazendo Histogramas\nCódigo para carregar os dados do notebook anterior:",
"%matplotlib inline\nimport pylab\nimport matplotlib\nimport pandas\nimport numpy\n\n\ndateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')\nsessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)",
"Na parte 2 já ficamos a saber que 'Orçamento de/do Estado' não se usava antes de 1984, e se falava mais de decretos-lei antes de 1983. \nMas sinceramente não encontramos nada de interessante. Vamos acelerar o processo, e olhar para mais palavras:",
"# retorna o número de ocorrências de palavra em texto\ndef conta_palavra(texto,palavra):\n return texto.count(palavra)\n\n# retorna um vector com um item por sessao, e valor verdadeiro se o ano é =i, falso se nao é\ndef selecciona_ano(data,i):\n return data.map(lambda d: d.year == i)\n\n# faz o histograma do número de ocorrencias de 'palavra' por ano\ndef histograma_palavra(palavra):\n # cria uma coluna de tabela contendo as contagens de palavra por cada sessão\n dados = sessoes['sessao'].map(lambda texto: conta_palavra(texto,palavra.lower()))\n \n ocorrencias_por_ano = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n # agrupa contagens por ano\n ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n\n f = pylab.figure(figsize=(10,6)) \n ax = pylab.bar(range(1976,2016),ocorrencias_por_ano)\n pylab.xlabel('Ano')\n pylab.ylabel('Ocorrencias de '+str(palavra))\n\n \nimport time\nstart = time.time()\nhistograma_palavra('Paulo Portas') #já vimos que Paulo e Portas foram anormalmente frequentes em 2000, vamos ver se há mais eventos destes\nprint(str(time.time()-start)+' s') # mede o tempo que o código 'histograma_palavra('Paulo Portas')' demora a executar, para nossa referencia\n",
"Tal como tinhamos visto antes, o ano 2000 foi um ano bastante presente para o Paulo Portas. Parece que as suas contribuições vêm em ondas.",
"histograma_palavra('Crise')",
"Sempre se esteve em crise, mas em 2010 foi uma super-crise.",
"histograma_palavra('aborto')",
"Os debates sobre o aborto parecem estar bem localizados, a 1982, 1984, 1997/8 e 2005.",
"histograma_palavra('Euro')\n\nhistograma_palavra('Europa')\n\nhistograma_palavra('geringonça')\n\nhistograma_palavra('corrupção')\n\nhistograma_palavra('calúnia')",
"Saiu de moda.",
"histograma_palavra('iraque')\n\nhistograma_palavra('china')\n\nhistograma_palavra('alemanha')\n\nhistograma_palavra('brasil')\n\nhistograma_palavra('internet')\n\nhistograma_palavra('telemóvel')\n\nhistograma_palavra('redes sociais')\n\nhistograma_palavra('sócrates')\n\nhistograma_palavra('droga')\n\nhistograma_palavra('aeroporto')\n\nhistograma_palavra('hospital')\n\nhistograma_palavra('médicos')",
"e se quisermos acumular varias palavras no mesmo histograma?",
"def conta_palavras(texto,palavras):\n l = [texto.count(palavra.lower()) for palavra in palavras]\n return sum(l)\n\ndef selecciona_ano(data,i):\n return data.map(lambda d: d.year == i)\n\ndef histograma_palavras(palavras):\n dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras))\n\n ocorrencias_por_ano = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n\n f = pylab.figure(figsize=(10,6)) \n ax = pylab.bar(range(1976,2016),ocorrencias_por_ano)\n pylab.xlabel('Ano')\n pylab.ylabel('Ocorrencias de '+str(palavras))\n\nhistograma_palavras(['escudos','contos','escudo'])\n\nhistograma_palavras(['muito bem','aplausos','fantastico','excelente','grandioso'])\n\nhistograma_palavras([' ecu ',' ecu.'])\n\nhistograma_palavra('União Europeia')\n\nhistograma_palavras(['CEE','Comunidade Económica Europeia'])",
"A União Europeia foi fundada em ~93 e a CEE integrada nesta (segundo a wikipedia), logo o gráfico faz sentido.\nVamos criar uma função para integrar os 2 graficos, para nos permitir comparar a evolução:",
"def conta_palavras(texto,palavras):\n l = [texto.count(palavra) for palavra in palavras]\n return sum(l)\n\ndef selecciona_ano(data,i):\n return data.map(lambda d: d.year == i)\n\n# calcula os dados para os 2 histogramas, e representa-os no mesmo gráfico\ndef grafico_palavras_vs_palavras(palavras1, palavras2):\n palavras1 = [p.lower() for p in palavras1]\n palavras2 = [p.lower() for p in palavras2]\n dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras1))\n ocorrencias_por_ano1 = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n \n dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras2))\n ocorrencias_por_ano2 = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n\n anos = range(1976,2016)\n f = pylab.figure(figsize=(10,6)) \n p1 = pylab.bar(anos, ocorrencias_por_ano1)\n p2 = pylab.bar(anos, ocorrencias_por_ano2,bottom=ocorrencias_por_ano1)\n \n pylab.legend([palavras1[0], palavras2[0]])\n \n pylab.xlabel('Ano')\n pylab.ylabel('Ocorrencias totais')\n\ngrafico_palavras_vs_palavras(['CEE','Comunidade Económica Europeia'],['União Europeia'])",
"Boa, uma substitui a outra, basicamente.",
"grafico_palavras_vs_palavras(['contos','escudo'],['euro.','euro ','euros'])",
"Novamente, uma substitui a outra.",
"histograma_palavra('Troika')",
"Ok isto parece um mistério. Falava-se bastante mais da troika em 1989 do que 2011. Vamos investigar isto procurando e mostrando as frases onde as palavras aparecem.\nQueremos saber o que foi dito quando se mencionou 'Troika' no parlamento. Vamos tentar encontrar e imprimir as frases onde se dão as >70 ocorrencias de troika de 1989 e as 25 de 2011.",
"sessoes_1989 = sessoes[selecciona_ano(sessoes['data'],1989)]\nsessoes_2011 = sessoes[selecciona_ano(sessoes['data'],2011)]\n\ndef divide_em_frases(texto):\n return texto.replace('!','.').replace('?','.').split('.')\n\ndef acumula_lista_de_lista(l):\n return [j for x in l for j in x ]\n \ndef selecciona_frases_com_palavra(sessoes, palavra):\n frases_ = sessoes['sessao'].map(divide_em_frases)\n frases = acumula_lista_de_lista(frases_)\n return list(filter(lambda frase: frase.find(palavra) != -1, frases))\n\n\nfrases_com_troika1989 = selecciona_frases_com_palavra(sessoes_1989, 'troika')\nprint('Frases com troika em 1989: ' + str(len(frases_com_troika1989)))\nfrases_com_troika2011 = selecciona_frases_com_palavra(sessoes_2011, 'troika')\nprint('Frases com troika em 2011: ' + str(len(frases_com_troika2011)))\n\nfrom IPython.display import Markdown, display\n\n#print markdown permite-nos escrever a negrito ou como título\ndef print_markdown(string):\n display(Markdown(string))\n\ndef imprime_frases(lista_de_frases, palavra_negrito):\n for i in range(len(lista_de_frases)):\n string = lista_de_frases[i].replace(palavra_negrito,'**' + palavra_negrito + '**')\n #print_markdown(str(i+1) + ':' + string) \n print(str(i+1) + ':' + string) \n # no Jupyter notebooks 4.3.1 não se pode gravar output em markdown, tem de ser texto normal\n # se estiverem a executar o notebook e não a ler no github, podem descomentar a linha anterior para ver o texto com formatação\n#print_markdown('1989:\\n====')\nprint('1989:\\n====')\nimprime_frases(frases_com_troika1989[1:73:5],'troika')\n#print_markdown('2011:\\n====')\nprint('2011:\\n====')\nimprime_frases(frases_com_troika2011[1:20:2],'troika')\n",
"Como vemos na última frase, a verdade é que no parlmento se usa mais o termo 'Troica' do que 'Troika'! Na comunicação social usa-se muito 'Troika'.\nE para quem não sabe o que foi a perestroika: https://pt.wikipedia.org/wiki/Perestroika\nOk, assim já faz sentido:",
"def conta_palavras(texto,palavras):\n l = [texto.count(palavra) for palavra in palavras]\n return sum(l)\n\ndef selecciona_ano(data,i):\n return data.map(lambda d: d.year == i)\n\n# calcula os dados para os 2 histogramas, e representa-os no mesmo gráfico\ndef grafico_palavras_vs_palavras(palavras1, palavras2):\n palavras1 = [p.lower() for p in palavras1]\n palavras2 = [p.lower() for p in palavras2]\n dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras1))\n ocorrencias_por_ano1 = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n \n dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras2))\n ocorrencias_por_ano2 = numpy.zeros(2016-1976)\n for i in range(0,2016-1976):\n ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])\n\n anos = range(1976,2016)\n f = pylab.figure(figsize=(10,6)) \n p1 = pylab.bar(anos, ocorrencias_por_ano1)\n p2 = pylab.bar(anos, ocorrencias_por_ano2,bottom=ocorrencias_por_ano1)\n \n pylab.legend([palavras1[0], palavras2[0]])\n \n pylab.xlabel('Ano')\n pylab.ylabel('Ocorrencias totais')\n\ngrafico_palavras_vs_palavras(['troica'],['troika'])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
TESScience/FPE_Test_Procedures | Evaluating Parameter Interdependence.ipynb | mit | [
"Evaluating Parameter Interdependence\nTest run on 10/29/15 by Ed Bokhour.\nUsing SD PCB Interface Board serial number 002, SD PCB Driver Board serial number 002, and SD PCB Video Board serial number 001. Running with new wrapper (FPE_Wrapper_6.1.2, for San Diego PCBs, dated 10/19/15.\nSet up the FPE\nRemember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.\nWhen you are running this notbook and it has not been power cycled, you should set preload=False.",
"from tessfpe.dhu.fpe import FPE\nfrom tessfpe.dhu.unit_tests import check_house_keeping_voltages\nimport time\nfpe1 = FPE(1, debug=False, preload=False, FPE_Wrapper_version='6.1.2')\nprint fpe1.version\ntime.sleep(.01)\nif check_house_keeping_voltages(fpe1):\n print \"Wrapper load complete. Interface voltages OK.\"",
"Set the operating parameters to the default values:",
"def set_fpe_defaults(fpe):\n \"Set the FPE to the default operating parameters, and outputs a table of the default values\"\n defaults = {}\n for k in range(len(fpe.ops.address)):\n if fpe.ops.address[k] is None:\n continue\n fpe.ops.address[k].value = fpe.ops.address[k].default\n defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default\n return defaults",
"Get, sort, and print the default operating parameters:",
"from tessfpe.data.operating_parameters import operating_parameters\n\nfor k in sorted(operating_parameters.keys()):\n v = operating_parameters[k]\n print k, \":\", v[\"default\"], v[\"unit\"]",
"Take a number of sets of housekeeping data, with one operating parameter varying across it's control range, then repeat for every operating parameter:",
"def get_base_name(name):\n import re\n if '_offset' not in name:\n return None\n offset_name = name\n derived_parameter_name = name.replace('_offset', '')\n base_name = None\n if 'low' in derived_parameter_name:\n base_name = derived_parameter_name.replace('low', 'high') \n if 'high' in derived_parameter_name:\n base_name = derived_parameter_name.replace('high', 'low')\n if 'output_drain' in derived_parameter_name:\n base_name = re.sub(r'output_drain_._offset$', 'reset_drain', offset_name)\n return base_name\n\ndef get_derived_parameter_name(name):\n if '_offset' not in name:\n return None\n offset_name = name\n return name.replace('_offset', '')\n\ndata = {}\n\nbase_steps = 15\n\noffset_steps = 5\n\nset_fpe_defaults(fpe1)\nfor i in range(base_steps,0,-1):\n for j in range(offset_steps, 0, -1):\n for k in range(len(fpe1.ops.address)):\n # If there's no operating parameter to set, go on to the next one\n if fpe1.ops.address[k] is None:\n continue\n name = fpe1.ops.address[k].name\n base_name = get_base_name(name)\n derived_parameter_name = get_derived_parameter_name(name)\n # If there's no derived parameter reflecting this parameter, go on to the next one\n if derived_parameter_name is None:\n continue\n offset_name = name\n base_low = fpe1.ops[base_name].low\n base_high = fpe1.ops[base_name].high\n offset_low = fpe1.ops[offset_name].low\n offset_high = fpe1.ops[offset_name].high\n base_value = base_low + i / float(base_steps) * (base_high - base_low)\n fpe1.ops[base_name].value = base_value\n fpe1.ops[offset_name].value = offset_low + j / float(offset_steps) * (offset_high - offset_low)\n fpe1.ops.send()\n analogue_house_keeping = fpe1.house_keeping[\"analogue\"]\n for k in range(len(fpe1.ops.address)):\n # If there's no operating parameter to set, go on to the next one\n if fpe1.ops.address[k] is None:\n continue\n name = fpe1.ops.address[k].name\n base_name = get_base_name(name)\n derived_parameter_name = get_derived_parameter_name(name)\n if derived_parameter_name is None:\n continue\n if derived_parameter_name not in data:\n data[derived_parameter_name] = {}\n offset_name = name\n base_low = fpe1.ops[base_name].low\n base_high = fpe1.ops[base_name].high\n offset_low = fpe1.ops[offset_name].low\n offset_high = fpe1.ops[offset_name].high\n base_value = base_low + i / float(base_steps) * (base_high - base_low)\n if base_value not in data[derived_parameter_name]:\n data[derived_parameter_name][base_value] = {\"X\": [], \"Y\": []}\n data[derived_parameter_name][base_value][\"X\"].append(fpe1.ops[base_name].value + \n fpe1.ops[offset_name].value)\n data[derived_parameter_name][base_value][\"Y\"].append(analogue_house_keeping[derived_parameter_name])",
"Set up to plot:",
"%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pylab",
"Plot selected data:",
"def get_range_square(X,Y):\n return [min(X + Y)-1, max(X + Y)+1]\n\n# Plot the set vs. measured values of selected channels:\nfor nom in sorted(data.keys()):\n print nom\n for base_value in sorted(data[nom].keys()):\n print base_value\n X = data[nom][base_value][\"X\"]\n Y = data[nom][base_value][\"Y\"]\n ran = get_range_square(X,Y)\n pylab.ylim(ran)\n pylab.xlim(ran)\n pylab.grid(True)\n plt.axes().set_aspect(1)\n plt.title(\"{derived_param} with base {base}\".format(\n derived_param=nom,\n base=base_value\n ))\n plt.scatter(X,Y,color='red')\n plt.plot(X,Y,color='blue')\n plt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | stable/_downloads/82d9c13e00105df6fd0ebed67b862464/ssp_projs_sensitivity_map.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Sensitivity map of SSP projections\nThis example shows the sources that have a forward field\nsimilar to the first SSP vector correcting for ECG.",
"# Author: Alexandre Gramfort <[email protected]>\n#\n# License: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\n\nfrom mne import read_forward_solution, read_proj, sensitivity_map\n\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\nsubjects_dir = data_path / 'subjects'\nmeg_path = data_path / 'MEG' / 'sample'\nfname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'\necg_fname = meg_path / 'sample_audvis_ecg-proj.fif'\n\nfwd = read_forward_solution(fname)\n\nprojs = read_proj(ecg_fname)\n# take only one projection per channel type\nprojs = projs[::2]\n\n# Compute sensitivity map\nssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')",
"Show sensitivity map",
"plt.hist(ssp_ecg_map.data.ravel())\nplt.show()\n\nargs = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)), smoothing_steps=7,\n hemi='rh', subjects_dir=subjects_dir)\nssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
sourabhrohilla/ds-masterclass-hands-on | session-2/python/Topic_Model_Recommender.ipynb | mit | [
"Topic Based Recommender\nTopic Based Recommender\n\nRepresent articles in terms of Topic Vector\nRepresent user in terms of Topic Vector of read articles\nCalculate cosine similarity between read and unread articles \nGet the recommended articles \n\nDescribing parameters:\n1. PATH_ARTICLE_TOPIC_DISTRIBUTION: specify the path where 'ARTICLE_TOPIC_DISTRIBUTION.csv' is present. <br/>\n2. PATH_NEWS_ARTICLES: specify the path where news_article.csv is present <br/>\n3. NO_OF_TOPIC: Number of topics specified when training your topic model. This would refer to the dimension of each vector representing an article <br/>\n4. ARTICLES_READ: List of Article_Ids read by the user <br/>\n5. NO_RECOMMENDED_ARTICLES: Refers to the number of recommended articles as a result",
"PATH_ARTICLE_TOPIC_DISTRIBUTION = \"/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/Article_Topic_Distribution.csv\"\nPATH_NEWS_ARTICLES = \"/home/phoenix/Documents/HandsOn/Final/news_articles.csv\"\nNO_OF_TOPICS=150\nARTICLES_READ=[7,6,76,61,761]\nNUM_RECOMMENDED_ARTICLES=5\n\nimport pandas as pd\nimport numpy as np\nfrom sklearn.metrics.pairwise import cosine_similarity",
"1. Represent Read Article in terms of Topic Vector",
"article_topic_distribution = pd.read_csv(PATH_ARTICLE_TOPIC_DISTRIBUTION)\narticle_topic_distribution.shape\n\narticle_topic_distribution.head()",
"Generate Article-Topic Distribution matrix",
"#Pivot the dataframe\narticle_topic_pivot = article_topic_distribution.pivot(index='Article_Id', columns='Topic_Id', values='Topic_Weight')\n#Fill NaN with 0\narticle_topic_pivot.fillna(value=0, inplace=True)\n#Get the values in dataframe as matrix\narticles_topic_matrix = article_topic_pivot.values\narticles_topic_matrix.shape\n\narticle_topic_pivot.head()",
"2. Represent user in terms of Topic Vector of read articles\nA user vector is represented in terms of average of read articles topic vector",
"#Select user in terms of read article topic distribution\nrow_idx = np.array(ARTICLES_READ)\nread_articles_topic_matrix=articles_topic_matrix[row_idx[:, None]]\n#Calculate the average of read articles topic vector \nuser_vector = np.mean(read_articles_topic_matrix, axis=0)\nuser_vector.shape\n\nuser_vector",
"3. Calculate cosine similarity between read and unread articles",
"def calculate_cosine_similarity(articles_topic_matrix, user_vector):\n articles_similarity_score=cosine_similarity(articles_topic_matrix, user_vector)\n recommended_articles_id = articles_similarity_score.flatten().argsort()[::-1]\n #Remove read articles from recommendations\n final_recommended_articles_id = [article_id for article_id in recommended_articles_id \n if article_id not in ARTICLES_READ ][:NUM_RECOMMENDED_ARTICLES]\n return final_recommended_articles_id\n\nrecommended_articles_id = calculate_cosine_similarity(articles_topic_matrix, user_vector)\nrecommended_articles_id",
"4. Recommendation Using Topic Model:-",
"#Recommended Articles and their title\nnews_articles = pd.read_csv(PATH_NEWS_ARTICLES)\nprint 'Articles Read'\nprint news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']\nprint '\\n'\nprint 'Recommender '\nprint news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']",
"Topics + NER Recommender\nTopic + NER Based Recommender\n\nRepresent user in terms of - <br/>\n (Alpha) <Topic Vector> + (1-Alpha) <NER Vector> <br/>\n where <br/>\n Alpha => [0,1] <br/>\n [Topic Vector] => Topic vector representation of concatenated read articles <br/>\n [NER Vector] => Topic vector representation of NERs associated with concatenated read articles <br/>\nCalculate cosine similarity between user vector and articles Topic matrix\nGet the recommended articles",
"ALPHA = 0.5\nDICTIONARY_PATH = \"/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/dictionary_of_words.p\"\nLDA_MODEL_PATH = \"/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/lda.model\"\n\nfrom nltk import word_tokenize, pos_tag, ne_chunk\nfrom nltk.chunk import tree2conlltags\nimport re\nfrom nltk.corpus import stopwords\nfrom nltk.tokenize import TweetTokenizer\nfrom nltk.stem.snowball import SnowballStemmer\nimport pickle\nimport gensim\nfrom gensim import corpora, models",
"1. Represent User in terms of Topic Distribution and NER\n\nRepresent user in terms of read article topic distribution\nRepresent user in terms of NERs associated with read articles\n 2.1 Get NERs of read articles\n 2.2 Load LDA model\n 2.3 Get topic distribution for the concated NERs\nGenerate user vector\n\n1.1. Represent user in terms of read article topic distribution",
"row_idx = np.array(ARTICLES_READ)\nread_articles_topic_matrix=articles_topic_matrix[row_idx[:, None]]\n#Calculate the average of read articles topic vector \nuser_topic_vector = np.mean(read_articles_topic_matrix, axis=0)\nuser_topic_vector.shape",
"1.2. Represent user in terms of NERs associated with read articles",
"# Get NERs of read articles\ndef get_ner(article):\n ne_tree = ne_chunk(pos_tag(word_tokenize(article)))\n iob_tagged = tree2conlltags(ne_tree)\n ner_token = ' '.join([token for token,pos,ner_tag in iob_tagged if not ner_tag==u'O']) #Discarding tokens with 'Other' tag\n return ner_token\n\narticles = news_articles['Content'].tolist()\nuser_articles_ner = ' '.join([get_ner(articles[i]) for i in ARTICLES_READ])\nprint \"NERs of Read Article =>\", user_articles_ner\n\nstop_words = set(stopwords.words('english'))\ntknzr = TweetTokenizer()\nstemmer = SnowballStemmer(\"english\")\n\ndef clean_text(text):\n cleaned_text=re.sub('[^\\w_\\s-]', ' ', text) #remove punctuation marks \n return cleaned_text #and other symbols \n\ndef tokenize(text):\n word = tknzr.tokenize(text) #tokenization\n filtered_sentence = [w for w in word if not w.lower() in stop_words] #removing stop words\n stemmed_filtered_tokens = [stemmer.stem(plural) for plural in filtered_sentence] #stemming\n tokens = [i for i in stemmed_filtered_tokens if i.isalpha() and len(i) not in [0, 1]]\n return tokens\n\n#Cleaning the article\ncleaned_text = clean_text(user_articles_ner)\narticle_vocabulary = tokenize(cleaned_text)\n\n#Load model dictionary\nmodel_dictionary = pickle.load(open(DICTIONARY_PATH,\"rb\"))\n#Generate article maping using IDs associated with vocab\ncorpus = [model_dictionary.doc2bow(text) for text in [article_vocabulary]]\n\n#Load LDA Model\nlda = models.LdaModel.load(LDA_MODEL_PATH)\n\n# Get topic distribution for the concated NERs\narticle_topic_distribution=lda.get_document_topics(corpus[0])\narticle_topic_distribution\n\nner_vector =[0]*NO_OF_TOPICS\nfor topic_id, topic_weight in article_topic_distribution:\n ner_vector[topic_id]=topic_weight\nuser_ner_vector = np.asarray(ner_vector).reshape(1,150)",
"1.3. Generate user vector",
"alpha_topic_vector = ALPHA*user_topic_vector\nalpha_ner_vector = (1-ALPHA) * user_ner_vector\nuser_vector = np.add(alpha_topic_vector,alpha_ner_vector)\nuser_vector",
"2. Calculate cosine similarity between user vector and articles Topic matrix",
"recommended_articles_id = calculate_cosine_similarity(articles_topic_matrix, user_vector)\nrecommended_articles_id\n# [array([ 0.75807146]), array([ 0.74644157]), array([ 0.74440326]), array([ 0.7420562]), array([ 0.73966259])]",
"3. Get recommended articles",
"#Recommended Articles and their title\nnews_articles = pd.read_csv(PATH_NEWS_ARTICLES)\nprint 'Articles Read'\nprint news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']\nprint '\\n'\nprint 'Recommender '\nprint news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DSSG2017/florence | dev/notebooks/Distributions_MM.ipynb | mit | [
"Plotting distributions\nFirst, import relevant libraries:",
"import warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"Then, load the data (takes a few moments):",
"# Load data\nuda = pd.read_csv(\"./aws-data/user_dist.txt\", sep=\"\\t\") # User distribution, all\nudf = pd.read_csv(\"./aws-data/user_dist_fl.txt\", sep=\"\\t\") # User distribution, Florence\n\ndra = pd.read_csv(\"./aws-data/user_duration.txt\", sep=\"\\t\") # Duration, all\ndrf = pd.read_csv(\"./aws-data/user_duration_fl.txt\", sep=\"\\t\") # Duration, Florence\n\ndra['min'] = pd.to_datetime(dra['min'], format='%Y-%m-%d%H:%M:%S')\ndra['max'] = pd.to_datetime(dra['max'], format='%Y-%m-%d%H:%M:%S')\ndrf['min'] = pd.to_datetime(drf['min'], format='%Y-%m-%d%H:%M:%S')\ndrf['max'] = pd.to_datetime(drf['max'], format='%Y-%m-%d%H:%M:%S')\n\ndra['duration'] = dra['max'] - dra['min']\ndrf['duration'] = drf['max'] - drf['min']\n\ndra['days'] = dra['duration'].dt.days\ndrf['days'] = drf['duration'].dt.days\n\ncda = pd.read_csv(\"./aws-data/calls_per_day.txt\", sep=\"\\t\") # Calls per day, all\ncdf = pd.read_csv(\"./aws-data/calls_per_day_fl.txt\", sep=\"\\t\") # Calls per day, Florence\ncda['day_'] = pd.to_datetime(cda['day_'], format='%Y-%m-%d%H:%M:%S').dt.date\ncdf['day_'] = pd.to_datetime(cdf['day_'], format='%Y-%m-%d%H:%M:%S').dt.date\n\ncda.head()\n\nmcpdf = cdf.groupby('cust_id')['count'].mean().to_frame() # Mean calls per day, Florence\nmcpdf.columns = ['mean_calls_per_day']\nmcpdf = mcpdf.sort_values('mean_calls_per_day',ascending=False)\nmcpdf.index.name = 'cust_id'\nmcpdf.reset_index(inplace=True)\nmcpdf.head()\n\n# mcpdf.plot(y='mean_calls_per_day', style='.', logy=True, figsize=(10,10))\nmcpdf.plot.hist(y='mean_calls_per_day', logy=True, figsize=(10,10), bins=100)\nplt.ylabel('Number of customers with x average calls per day')\n# plt.xlabel('Customer rank')\nplt.title('Mean number of calls per day during days in Florence by foreign SIM cards')\n\ncvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer') # Count versus days\ncvd.plot.scatter(x='days', y='count', s=.1, figsize = (10, 10))\nplt.ylabel('Number of calls')\nplt.xlabel('Duration between first and last days active')\nplt.title('Calls versus duration of records of foreign SIMs in Florence')\n\nfr = drf['days'].value_counts().to_frame() # NOTE: FIGURE OUT HOW TO ROUND, NOT TRUNCATE\nfr.columns = ['frequency']\nfr.index.name = 'days'\nfr.reset_index(inplace=True)\nfr = fr.sort_values('days')\nfr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()",
"The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.",
"fr.plot(x='days', y='frequency', style='o-', logy=True, figsize = (10, 10))\nplt.ylabel('Number of people')\nplt.axvline(14,ls='dotted')\nplt.title('Foreign SIM days between first and last instances in Florence')\n\ncvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer') # Count versus days\ncvd.plot.scatter(x='days', y='count', s=.1, figsize = (10, 10))\nplt.ylabel('Number of calls')\nplt.xlabel('Duration between first and last days active')\nplt.title('Calls versus duration of records of foreign SIMs in Florence')",
"Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.",
"fr = udf['count'].value_counts().to_frame()\nfr.columns = ['frequency']\nfr.index.name = 'calls'\nfr.reset_index(inplace=True)\nfr = fr.sort_values('calls')\nfr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()\nfr.head()\n\nfr.plot(x='calls', y='frequency', style='o-', logx=True, figsize = (10, 10))\n# plt.axvline(5,ls='dotted')\nplt.ylabel('Number of people')\nplt.title('Number of people placing or receiving x number of calls over 4 months')",
"It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.",
"fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize = (10, 10))\nplt.axhline(1.0,ls='dotted',lw=.5)\nplt.axhline(.90,ls='dotted',lw=.5)\nplt.axhline(.75,ls='dotted',lw=.5)\nplt.axhline(.67,ls='dotted',lw=.5)\nplt.axhline(.50,ls='dotted',lw=.5)\nplt.axhline(.33,ls='dotted',lw=.5)\nplt.axhline(.25,ls='dotted',lw=.5)\nplt.axhline(.10,ls='dotted',lw=.5)\nplt.axhline(0.0,ls='dotted',lw=.5)\nplt.axvline(max(fr['calls'][fr['cumulative']<.90]),ls='dotted',lw=.5)\nplt.ylabel('Cumulative fraction of people')\nplt.title('Cumulative fraction of people placing or receiving x number of calls over 4 months')",
"We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to either where cell phone towers are, and/or the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query,\n\\o towers_with_counts.txt\nselect lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m) ) as days from optourism.cdr_foreigners group by lat, lon order by calls desc;\n\\q\ninto the file towers_with_counts.txt. This is followed by the bash command\ncat towers_with_counts.txt | sed s/\\ \\|\\ /'\\t'/g | sed s/\\ //g | sed 2d > towers_with_counts2.txt\nto clean up the postgres output format.",
"df2 = pd.read_table(\"./aws-data/towers_with_counts2.txt\")\ndf2.head()",
"Do the same thing as above.",
"fr2 = df2['count'].value_counts().to_frame()\nfr2.columns = ['frequency']\nfr2.index.name = 'count'\nfr2.reset_index(inplace=True)\nfr2 = fr2.sort_values('count')\nfr2['cumulative'] = fr2['frequency'].cumsum()/fr2['frequency'].sum()\nfr2.head()\n\nfr2.plot(x='count', y='frequency', style='o-', logx=True, figsize = (10, 10))\n# plt.axvline(5,ls='dotted')\nplt.ylabel('Number of cell towers')\nplt.title('Number of towers with x number of calls placed or received over 4 months')",
"Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.",
"fr2.plot(x='count', y='cumulative', style='o-', logx=True, figsize = (10, 10))\nplt.axhline(0.1,ls='dotted',lw=.5)\nplt.axvline(max(fr2['count'][fr2['cumulative']<.10]),ls='dotted',lw=.5)\nplt.axhline(0.5,ls='dotted',lw=.5)\nplt.axvline(max(fr2['count'][fr2['cumulative']<.50]),ls='dotted',lw=.5)\nplt.axhline(0.9,ls='dotted',lw=.5)\nplt.axvline(max(fr2['count'][fr2['cumulative']<.90]),ls='dotted',lw=.5)\nplt.ylabel('Cumulative fraction of cell towers')\nplt.title('Cumulative fraction of towers with x number of calls placed or received over 4 months')",
"Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.",
"df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')\ndf['date'] = df['datetime'].dt.floor('d') # Faster than df['datetime'].dt.date\n\ndf2 = df.groupby(['cust_id','date']).size().to_frame()\ndf2.columns = ['count']\ndf2.index.name = 'date'\ndf2.reset_index(inplace=True)\ndf2.head(20)\n\ndf3 = (df2.groupby('cust_id')['date'].max() - df2.groupby('cust_id')['date'].min()).to_frame()\ndf3['calls'] = df2.groupby('cust_id')['count'].sum()\ndf3.columns = ['days','calls']\ndf3['days'] = df3['days'].dt.days\ndf3.head()\n\nfr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()\n\n# plt.scatter(np.log(df3['days']), np.log(df3['calls']))\n# plt.show()\n\nfr.plot(x='calls', y='freq', style='o', logx=True, logy=True)\n\nx=np.log(fr['calls'])\ny=np.log(1-fr['freq'].cumsum()/fr['freq'].sum())\nplt.plot(x, y, 'r-')\n\n# How many home_Regions\nnp.count_nonzero(data['home_region'].unique())\n\n# How many customers\nnp.count_nonzero(data['cust_id'].unique())\n\n# How many Nulls are there in the customer ID column?\ndf['cust_id'].isnull().sum()\n\n# How many missing data are there in the customer ID?\nlen(df['cust_id']) - df['cust_id'].count()\n\ndf['cust_id'].unique()\n\ndata_italians = pd.read_csv(\"./aws-data/firence_italians_3days_past_future_sample_1K_custs.csv\", header=None)\ndata_italians.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']\nregions = np.array(data_italians['home_region'].unique())\nregions\n\n'Sardegna' in data['home_region']"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jgarciab/wwd2017 | class4/class4_timeseries.ipynb | gpl-3.0 | [
"Working with data 2017. Class 4\nContact\nJavier Garcia-Bernardo\[email protected]\n0. Structure\n\nStats\nDefinitions\nWhat's a p-value?\nOne-tailed test vs two-tailed test\nCount vs expected count (binomial test)\nIndependence between factors: ($\\chi^2$ test) \n\n\nIn-class exercises to melt, pivot, concat, merge, groupby and plot.\nRead data from websited\nTime series",
"import pandas as pd\nimport numpy as np\nimport pylab as plt\nimport seaborn as sns\nfrom scipy.stats import chi2_contingency,ttest_ind\n\n#This allows us to use R\n%load_ext rpy2.ipython\n\n#Visualize in line\n%matplotlib inline\n\n\n#Be able to plot images saved in the hard drive\nfrom IPython.display import Image,display\n\n#Make the notebook wider\nfrom IPython.core.display import display, HTML \ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\"))\n",
"3. Read tables from websites\npandas is cool\n- Use pd.read_html(url)\n- It returns a list of all tables in the website\n- It tries to guess the encoding of the website, but with no much success.",
"df = pd.read_html(\"https://piie.com/summary-economic-sanctions-episodes-1914-2006\",encoding=\"UTF-8\")\nprint(type(df),len(df))\ndf\n\ndf[0].head(10)\n\ndf[0].columns\n\ndf = pd.read_html(\"https://piie.com/summary-economic-sanctions-episodes-1914-2006\",encoding=\"UTF-8\")\ndf = df[0]\nprint(df.columns)\ndf.columns = ['Year imposed', 'Year ended', 'Principal sender',\n 'Target country', 'Policy goal',\n 'Success score (scale 1 to 16)',\n 'Cost to target (percent of GNP)']\n\ndf = df.replace('negligible', 0) \ndf = df.replace(\"–\",\"-\",regex=True) #the file uses long dashes\ndf.to_csv(\"data/economic_sanctions.csv\",index=None,sep=\"\\t\")\n\ndf = pd.read_csv(\"data/economic_sanctions.csv\",sep=\"\\t\",na_values=[\"-\",\"Ongoing\"])\ndf[\"Duration\"] = df[\"Year ended\"] - df[\"Year imposed\"]\ndf.head()\n\nsns.lmplot(x=\"Duration\",y=\"Cost to target (percent of GNP)\",data=df,fit_reg=False,hue=\"Year imposed\",legend=False,palette=\"YlOrBr\")\nplt.ylim((-2,10))\nplt.legend(loc=\"center left\", bbox_to_anchor=(1, 0.5),ncol=4)\n",
"4. Parse dates\npandas is cool\n- Use parse_dates=[columns] when reading the file\n- It parses the date\n4.1. Use parse_dates when reading the file",
"df = pd.read_csv(\"data/exchange-rate-twi-may-1970-aug-1.tsv\",sep=\"\\t\",parse_dates=[\"Month\"],skipfooter=2)\ndf.head()",
"4.2. You can now filter by date",
"#filter by time\ndf_after1980 = df.loc[df[\"Month\"] > \"1980-05-02\"] #year-month-date\ndf_after1980.columns = [\"Date\",\"Rate\"]\ndf_after1980.head()",
"4.3. And still extract columns of year and month",
"#make columns with year and month (useful for models)\ndf_after1980[\"Year\"] = df_after1980[\"Date\"].apply(lambda x: x.year)\ndf_after1980[\"Month\"] = df_after1980[\"Date\"].apply(lambda x: x.month)\ndf_after1980.head()",
"4.4. You can resample the data with a specific frequency\n\nVery similar to groupby.\nGroups the data with a specific frequency\n\"A\" = End of year\n\"B\" = Business day\nothers: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases\n\n\nThen you tell pandas to apply a function to the group (mean/max/median...)",
"#resample\ndf_after1980_resampled = df_after1980.resample(\"A\",on=\"Date\").mean()\ndisplay(df_after1980_resampled.head())\n\ndf_after1980_resampled = df_after1980_resampled.reset_index()\ndf_after1980_resampled.head()",
"4.5 And of course plot it with a line plot",
"#Let's visualize it\nplt.figure(figsize=(6,4))\nplt.plot(df_after1980[\"Date\"],df_after1980[\"Rate\"],label=\"Before resampling\")\nplt.plot(df_after1980_resampled[\"Date\"],df_after1980_resampled[\"Rate\"],label=\"After resampling\")\nplt.xlabel(\"Time\")\nplt.ylabel(\"Rate\")\nplt.legend()\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MartyWeissman/Python-for-number-theory | P3wNT Notebook 3.ipynb | gpl-3.0 | [
"Part 3: Lists and the sieve of Eratosthenes in Python 3.x\nPython provides a powerful set of tools to create and manipulate lists of data. In this part, we take a deep dive into the Python list type. We use Python lists to implement and optimize the Sieve of Eratosthenes, which will produce a list of all prime numbers up to a big number (like 10 million) in a snap. Along the way, we introduce some Python techniques for mathematical functions and data analysis. This programming lesson is meant to complement Chapter 2 of An Illustrated Theory of Numbers, and mathematical background can be found there.\nTable of Contents\n\nPrimality test\nList manipulation\nThe sieve\nData analysis\n\n<a id='primetest'></a>\nPrimality testing\nBefore diving into lists, we recall the brute force primality test that we created in the last lesson. To test whether a number n is prime, we can simply check for factors. This yields the following primality test.",
"def is_prime(n):\n '''\n Checks whether the argument n is a prime number.\n Uses a brute force search for factors between 1 and n.\n '''\n for j in range(2,n): # the range of numbers 2,3,...,n-1.\n if n%j == 0: # is n divisible by j?\n print(\"{} is a factor of {}.\".format(j,n))\n return False\n return True",
"We can also implement this test with a while loop instead of a for loop. This doesn't make much of a difference, in Python 3.x. (In Python 2.x, this would save memory).",
"def is_prime(n):\n '''\n Checks whether the argument n is a prime number.\n Uses a brute force search for factors between 1 and n.\n '''\n j = 2\n while j < n: # j will proceed through the list of numbers 2,3,...,n-1.\n if n%j == 0: # is n divisible by j?\n print(\"{} is a factor of {}.\".format(j,n))\n return False\n j = j + 1 # There's a Python abbreviation for this: j += 1.\n return True\n\nis_prime(10001)\n\nis_prime(101)",
"If $n$ is a prime number, then the is_prime(n) function will iterate through all the numbers between $2$ and $n-1$. But this is overkill! Indeed, if $n$ is not prime, it will have a factor between $2$ and the square root of $n$. This is because factors come in pairs: if $ab = n$, then one of the factors, $a$ or $b$, must be less than or equal to the square root of $n$. So it suffices to search for factors up to (and including) the square root of $n$.\nWe haven't worked with square roots in Python yet. But Python comes with a standard math package which enables square roots, trig functions, logs, and more. Click the previous link for documentation. This package doesn't load automatically when you start Python, so you have to load it with a little Python code.",
"from math import sqrt",
"This command imports the square root function (sqrt) from the package called math. Now you can find square roots.",
"sqrt(1000)",
"There are a few different ways to import functions from packages. The above syntax is a good starting point, but sometimes problems can arise if different packages have functions with the same name. Here are a few methods of importing the sqrt function and how they differ.\nfrom math import sqrt: After this command, sqrt will refer to the function from the math package (overriding any previous definition).\nimport math: After this command, all the functions from the math package will be imported. But to call sqrt, you would type a command like math.sqrt(1000). This is convenient if there are potential conflicts with other packages.\nfrom math import *: After this command, all the functions from the math package will be imported. To call them, you can access them directly with a command like sqrt(1000). This can easily cause conflicts with other packages, since packages can have hundreds of functions in them!\nimport math as mth: Some people like abbreviations. This imports all the functions from the math package. To call one, you type a command like mth.sqrt(1000).",
"import math\n\nmath.sqrt(1000)\n\nfactorial(10) # This will cause an error!\n\nmath.factorial(10) # This is ok, since the math package comes with a function called factorial.",
"Now let's improve our is_prime(n) function by searching for factors only up to the square root of the number n. We consider two options.",
"def is_prime_slow(n):\n '''\n Checks whether the argument n is a prime number.\n Uses a brute force search for factors between 1 and n.\n '''\n j = 2\n while j <= sqrt(n): # j will proceed through the list of numbers 2,3,... up to sqrt(n).\n if n%j == 0: # is n divisible by j?\n print(\"{} is a factor of {}.\".format(j,n))\n return False\n j = j + 1 # There's a Python abbreviation for this: j += 1.\n return True\n\ndef is_prime_fast(n):\n '''\n Checks whether the argument n is a prime number.\n Uses a brute force search for factors between 1 and n.\n '''\n j = 2\n root_n = sqrt(n)\n while j <= root_n: # j will proceed through the list of numbers 2,3,... up to sqrt(n).\n if n%j == 0: # is n divisible by j?\n print(\"{} is a factor of {}.\".format(j,n))\n return False\n j = j + 1 # There's a Python abbreviation for this: j += 1.\n return True\n\nis_prime_fast(1000003)\n\nis_prime_slow(1000003)",
"I've chosen function names with \"fast\" and \"slow\" in them. But what makes them faster or slower? Are they faster than the original? And how can we tell?\nPython comes with a great set of tools for these questions. The simplest (for the user) are the time utilities. By placing the magic %timeit before a command, Python does something like the following:\n\nPython makes a little container in your computer devoted to the computations, to avoid interference from other running programs if possible.\nPython executes the command lots and lots of times.\nPython averages the amount of time taken for each execution. \n\nGive it a try below, to compare the speed of the functions is_prime (the original) with the new is_prime_fast and is_prime_slow. Note that the %timeit commands might take a little while.",
"%timeit is_prime_fast(1000003)\n\n%timeit is_prime_slow(1000003)\n\n%timeit is_prime(1000003)",
"Time is measured in seconds, milliseconds (1 ms = 1/1000 second), microseconds (1 µs = 1/1,000,000 second), and nanoseconds (1 ns = 1/1,000,000,000 second). So it might appear at first that is_prime is the fastest, or about the same speed. But check the units! The other two approaches are about a thousand times faster! How much faster were they on your computer?",
"is_prime_fast(10000000000037) # Don't try this with `is_prime` unless you want to wait for a long time!",
"Indeed, the is_prime_fast(n) function will go through a loop of length about sqrt(n) when n is prime. But is_prime(n) will go through a loop of length about n. Since sqrt(n) is much less than n, especially when n is large, the is_prime_fast(n) function is much faster.\nBetween is_prime_fast and is_prime_slow, the difference is that the fast version precomputes the square root sqrt(n) before going through the loop, where the slow version repeats the sqrt(n) every time the loop is repeated. Indeed, writing while j <= sqrt(n): suggests that Python might execute sqrt(n) every time to check. This might lead to Python computing the same square root a million times... unnecessarily! \nA basic principle of programming is to avoid repetition. If you have the memory space, just compute once and store the result. It will probably be faster to pull the result out of memory than to compute it again.\nPython does tend to be pretty smart, however. It's possible that Python is precomputing sqrt(n) even in the slow loop, just because it's clever enough to tell in advance that the same thing is being computed over and over again. This depends on your Python version and takes place behind the scenes. If you want to figure it out, there's a whole set of tools (for advanced programmers) like the disassembler to figure out what Python is doing.",
"is_prime_fast(10**14 + 37) # This might get a bit of delay.",
"Now we have a function is_prime_fast(n) that is speedy for numbers n in the trillions! You'll probably start to hit a delay around $10^{15}$ or so, and the delays will become intolerable if you add too many more digits. In a future lesson, we will see a different primality test that will be essentially instant even for numbers around $10^{1000}$! \nExercises\n\n\nTo check whether a number n is prime, you can first check whether n is even, and then check whether n has any odd factors. Change the is_prime_fast function by implementing this improvement. How much of a speedup did you get?\n\n\nUse the %timeit tool to study the speed of is_prime_fast for various sizes of n. Using 10-20 data points, make a graph relating the size of n to the time taken by the is_prime_fast function.\n\n\nWrite a function is_square(n) to test whether a given integer n is a perfect square (like 0, 1, 4, 9, 16, etc.). How fast can you make it run? Describe the different approaches you try and which are fastest.\n\n\n<a id='lists'></a>\nList manipulation\nWe have already (briefly) encountered the list type in Python. Recall that the range command produces a range, which can be used to produce a list. For example, list(range(10)) produces the list [0,1,2,3,4,5,6,7,8,9]. You can also create your own list by a writing out its terms, e.g. L = [4,7,10].\nHere we work with lists, and a very Pythonic approach to list manipulation. With practice, this can be a powerful tool to write fast algorithms, exploiting the hard-wired capability of your computer to shift and slice large chunks of data. Our application will be to implement the Sieve of Eratosthenes, producing a long list of prime numbers (without using any is_prime test along the way).\nWe begin by creating two lists to play with.",
"L = [0,'one',2,'three',4,'five',6,'seven',8,'nine',10]",
"List terms and indices\nNotice that the entries in a list can be of any type. The above list L has some integer entries and some string entries. Lists are ordered in Python, starting at zero. One can access the $n^{th}$ entry in a list with a command like L[n].",
"L[3]\n\nprint(L[3]) # Note that Python has slightly different approaches to the print-function, and the output above.\n\nprint(L[4]) # We will use the print function, because it makes our printing intentions clear.\n\nprint(L[0])",
"The location of an entry is called its index. So at the index 3, the list L stores the entry three. Note that the same entry can occur in many places in a list. E.g. [7,7,7] is a list with 7 at the zeroth, first, and second index.",
"print(L[-1])\nprint(L[-2])",
"The last bit of code demonstrates a cool Python trick. The \"-1st\" entry in a list refers to the last entry. The \"-2nd entry\" refers to the second-to-last entry, and so on. It gives a convenient way to access both sides of the list, even if you don't know how long it is.\nOf course, you can use Python to find out how long a list is.",
"len(L)",
"You can also use Python to find the sum of a list of numbers.",
"sum([1,2,3,4,5])\n\nsum(range(100)) # Be careful. This is the sum of which numbers? # The sum function can take lists or ranges.",
"List slicing\nSlicing lists allows us to create new lists (or ranges) from old lists (or ranges), by chopping off one end or the other, or even slicing out entries at a fixed interval. The simplest syntax has the form L[a:b] where a denotes the index of the starting entry and index of the final entry is one less than b. It is best to try a few examples to get a feel for it.\nSlicing a list with a command like L[a:b] doesn't actually change the original list L. It just extracts some terms from the list and outputs those terms. Soon enough, we will change the list L using a list assignment.",
"L[0:5]\n\nL[5:11] # Notice that L[0:5] and L[5:11] together recover the whole list.\n\nL[3:7]",
"This continues the strange (for beginners) Python convention of starting at the first number and ending just before the last number. Compare to range(3,7), for example. \nThe command L[0:5] can be replaced by L[:5] to abbreviate. The empty opening index tells Python to start at the beginning. Similarly, the command L[5:11] can be replaced by L[5:]. The empty closing index tells Python to end the slice and the end. This is helpful if one doesn't know where the list ends.",
"L[:5]\n\nL[3:]",
"Just like the range command, list slicing can take an optional third argument to give a step size. To understand this, try the command below.",
"L[2:10]\n\nL[2:10:3]",
"If, in this three-argument syntax, the first or second argument is absent, then the slice starts at the beginning of the list or ends at the end of the list accordingly.",
"L # Just a reminder. We haven't modified the original list!\n\nL[:9:3] # Start at zero, go up to (but not including) 9, by steps of 3.\n\nL[2: :3] # Start at two, go up through the end of the list, by steps of 3.\n\nL[::3] # Start at zero, go up through the end of the list, by steps of 3.",
"Changing list slices\nNot only can we extract and study terms or slices of a list, we can change them by assignment. The simplest case would be changing a single term of a list.",
"print(L) # Start with the list L.\n\nL[5] = 'Bacon!'\n\nprint(L) # What do you think L is now?\n\nprint(L[2::3]) # What do you think this will do?",
"We can change an entire slice of a list with a single assignment. Let's change the first two terms of L in one line.",
"L[:2] = ['Pancakes', 'Ham'] # What was L[:2] before?\n\nprint(L) # Oh... what have we done!\n\nL[0]\n\nL[1]\n\nL[2]",
"We can change a slice of a list with a single assignment, even when that slice does not consist of consecutive terms. Try to predict what the following commands will do.",
"print(L) # Let's see what the list looks like before.\n\nL[::2] = ['A','B','C','D','E','F'] # What was L[::2] before this assignment? \n\nprint(L) # What do you predict?",
"Exercises\n\n\nCreate a list L with L = [1,2,3,...,100] (all the numbers from 1 to 100). What is L[50]?\n\n\nTake the same list L, and extract a slice of the form [5,10,15,...,95] with a command of the form L[a:b:c].\n\n\nTake the same list L, and change all the even numbers to zeros, so that L looks like [1,0,3,0,5,0,...,99,0]. Hint: You might wish to use the list [0]*50.\n\n\nTry the command L[-1::-1] on a list. What does it do? Can you guess before executing it? Can you understand why? In fact, strings are lists too. Try setting L = 'Hello' and the previous command.\n\n\n<a id='sieve'></a>\nSieve of Eratosthenes\nThe Sieve of Eratosthenes (hereafter called \"the sieve\") is a very fast way of producing long lists of primes, without doing repeated primality checking. It is described in more detail in Chapter 2 of An Illustrated Theory of Numbers. The basic idea is to start with all of the natural numbers, and successively filter out, or sieve, the multiples of 2, then the multiples of 3, then the multiples of 5, etc., until only primes are left.\nUsing list slicing, we can carry out this sieving process efficiently. And with a few more tricks we encounter here, we can carry out the Sieve very efficiently. \nThe basic sieve\nThe first approach we introduce is a bit naive, but is a good starting place. We will begin with a list of numbers up to 100, and sieve out the appropriate multiples of 2,3,5,7.",
"primes = list(range(100)) # Let's start with the numbers 0...99.",
"Now, to \"filter\", i.e., to say that a number is not prime, let's just change the number to the value None.",
"primes[0] = None # Zero is not prime.\nprimes[1] = None # One is not prime.\nprint(primes) # What have we done?",
"Now let's filter out the multiples of 2, starting at 4. This is the slice primes[4::2]",
"primes[4::2] = [None] * len(primes[4::2]) # The right side is a list of Nones, of the necessary length.\nprint(primes) # What have we done?",
"Now we filter out the multiples of 3, starting at 9.",
"primes[9::3] = [None] * len(primes[9::3]) # The right side is a list of Nones, of the necessary length.\nprint(primes) # What have we done?",
"Next the multiples of 5, starting at 25 (the first multiple of 5 greater than 5 that's left!)",
"primes[25::5] = [None] * len(primes[25::5]) # The right side is a list of Nones, of the necessary length.\nprint(primes) # What have we done?",
"Finally, the multiples of 7, starting at 49 (the first multiple of 7 greater than 7 that's left!)",
"primes[49::7] = [None] * len(primes[49::7]) # The right side is a list of Nones, of the necessary length.\nprint(primes) # What have we done?",
"What's left? A lot of Nones and the prime numbers up to 100. We have successfully sieved out all the nonprime numbers in the list, using just four sieving steps (and setting 0 and 1 to None manually). \nBut there's a lot of room for improvement, from beginning to end!\n\nThe format of the end result is not so nice.\nWe had to sieve each step manually. It would be much better to have a function prime_list(n) which would output a list of primes up to n without so much supervision.\nThe memory usage will be large, if we need to store all the numbers up to a large n at the beginning.\n\nWe solve these problems in the following way.\n\nWe will use a list of booleans rather than a list of numbers. The ending list will have a True value at prime indices and a False value at composite indices. This reduces the memory usage and increases the speed. \nA which function (explained soon) will make the desired list of primes after everything else is done.\nWe will proceed through the sieving steps algorithmically rather than entering each step manually.\n\nHere is a somewhat efficient implementation of the Sieve in Python.",
"def isprime_list(n):\n ''' \n Return a list of length n+1\n with Trues at prime indices and Falses at composite indices.\n '''\n flags = [True] * (n+1) # A list [True, True, True,...] to start.\n flags[0] = False # Zero is not prime. So its flag is set to False.\n flags[1] = False # One is not prime. So its flag is set to False.\n p = 2 # The first prime is 2. And we start sieving by multiples of 2.\n \n while p <= sqrt(n): # We only need to sieve by p is p <= sqrt(n).\n if flags[p]: # We sieve the multiples of p if flags[p]=True.\n flags[p*p::p] = [False] * len(flags[p*p::p]) # Sieves out multiples of p, starting at p*p.\n p = p + 1 # Try the next value of p.\n \n return flags\n\nprint(isprime_list(100))",
"If you look carefully at the list of booleans, you will notice a True value at the 2nd index, the 3rd index, the 5th index, the 7th index, etc.. The indices where the values are True are precisely the prime indices. Since booleans take the smallest amount of memory of any data type (one bit of memory per boolean), your computer can carry out the isprime_list(n) function even when n is very large.\nTo be more precise, there are 8 bits in a byte. There are 1024 bytes (about 1000) in a kilobyte. There are 1024 kilobytes in a megabyte. There are 1024 megabytes in a gigabyte. Therefore, a gigabyte of memory is enough to store about 8 billion bits. That's enough to store the result of isprime_list(n) when n is about 8 billion. Not bad! And your computer probably has 4 or 8 or 12 or 16 gigabytes of memory to use.\nTo transform the list of booleans into a list of prime numbers, we create a function called where. This function uses another Python technique called list comprehension. We discuss this technique later in this lesson, so just use the where function as a tool for now, or read about list comprehension if you're curious.",
"def where(L):\n '''\n Take a list of booleans as input and\n outputs the list of indices where True occurs.\n '''\n return [n for n in range(len(L)) if L[n]]\n ",
"Combined with the isprime_list function, we can produce long lists of primes.",
"print(where(isprime_list(100)))",
"Let's push it a bit further. How many primes are there between 1 and 1 million? We can figure this out in three steps:\n\nCreate the isprime_list.\nUse where to get the list of primes.\nFind the length of the list of primes.\n\nBut it's better to do it in two steps.\n\nCreate the isprime_list.\nSum the list! (Note that True is 1, for the purpose of summation!)",
"sum(isprime_list(1000000)) # The number of primes up to a million!\n\n%timeit isprime_list(10**6) # 1000 ms = 1 second.\n\n%timeit sum(isprime_list(10**6))",
"This isn't too bad! It takes a fraction of a second to identify the primes up to a million, and a smaller fraction of a second to count them! But we can do a little better. \nThe first improvement is to take care of the even numbers first. If we count carefully, then the sequence 4,6,8,...,n (ending at n-1 if n is odd) has the floor of (n-2)/2 terms. Thus the line flags[4::2] = [False] * ((n-2)//2) will set all the flags to False in the sequence 4,6,8,10,... From there, we can begin sieving by odd primes starting with 3.\nThe next improvement is that, since we've already sieved out all the even numbers (except 2), we don't have to sieve out again by even multiples. So when sieving by multiples of 3, we don't have to sieve out 9,12,15,18,21,etc.. We can just sieve out 9,15,21,etc.. When p is an odd prime, this can be taken care of with the code flags[p*p::2*p] = [False] * len(flags[p*p::2*p]).",
"def isprime_list(n):\n ''' \n Return a list of length n+1\n with Trues at prime indices and Falses at composite indices.\n '''\n flags = [True] * (n+1) # A list [True, True, True,...] to start.\n flags[0] = False # Zero is not prime. So its flag is set to False.\n flags[1] = False # One is not prime. So its flag is set to False.\n flags[4::2] = [False] * ((n-2)//2)\n p = 3\n while p <= sqrt(n): # We only need to sieve by p is p <= sqrt(n).\n if flags[p]: # We sieve the multiples of p if flags[p]=True.\n flags[p*p::2*p] = [False] * len(flags[p*p::2*p]) # Sieves out multiples of p, starting at p*p.\n p = p + 2 # Try the next value of p. Note that we can proceed only through odd p!\n \n return flags\n\n%timeit sum(isprime_list(10**6)) # How much did this speed it up?",
"Another modest improvement is the following. In the code above, the program counts the terms in sequences like 9,15,21,27,..., in order to set them to False. This is accomplished with the length command len(flags[p*p::2*p]). But that length computation is a bit too intensive. A bit of algebraic work shows that the length is given formulaically in terms of p and n by the formula: \n$$len = \\lfloor \\frac{n - p^2 - 1}{2p} \\rfloor + 1$$\n(Here $\\lfloor x \\rfloor$ denotes the floor function, i.e., the result of rounding down.) Putting this into the code yields the following.",
"def isprime_list(n):\n ''' \n Return a list of length n+1\n with Trues at prime indices and Falses at composite indices.\n '''\n flags = [True] * (n+1) # A list [True, True, True,...] to start.\n flags[0] = False # Zero is not prime. So its flag is set to False.\n flags[1] = False # One is not prime. So its flag is set to False.\n flags[4::2] = [False] * ((n-2)//2)\n p = 3\n while p <= sqrt(n): # We only need to sieve by p is p <= sqrt(n).\n if flags[p]: # We sieve the multiples of p if flags[p]=True.\n flags[p*p::2*p] = [False] * ((n-p*p-1)//(2*p)+1) # Sieves out multiples of p, starting at p*p.\n p = p + 2 # Try the next value of p.\n \n return flags\n\n%timeit sum(isprime_list(10**6)) # How much did this speed it up?",
"That should be pretty fast! It should be under 100 ms (one tenth of one second!) to determine the primes up to a million, and on a newer computer it should be under 50ms. We have gotten pretty close to the fastest algorithms that you can find in Python, without using external packages (like SAGE or sympy). See the related discussion on StackOverflow... the code in this lesson was influenced by the code presented there.\nExercises\n\n\nProve that the length of range(p*p, n, 2*p) equals $\\lfloor \\frac{n - p^2 - 1}{2p} \\rfloor + 1$.\n\n\nA natural number $n$ is called squarefree if it has no perfect square divides $n$ except for 1. Write a function squarefree_list(n) which outputs a list of booleans: True if the index is squarefree and False if the index is not squarefree. For example, if you execute squarefree_list(12), the output should be [False, True, True, True, False, True, True, True, False, False, True, True, False]. Note that the False entries are located the indices 0, 4, 8, 9, 12. These natural numbers have perfect square divisors besides 1. \n\n\nYour DNA contains about 3 billion base pairs. Each \"base pair\" can be thought of as a letter, A, T, G, or C. How many bits would be required to store a single base pair? In other words, how might you convert a sequence of booleans into a letter A,T,G, or C? Given this, how many megabytes or gigabytes are required to store your DNA? How many people's DNA would fit on a thumb-drive?\n\n\n<a id='analysis'></a>\nData analysis\nNow that we can produce a list of prime numbers quickly, we can do some data analysis: some experimental number theory to look for trends or patterns in the sequence of prime numbers. Since Euclid (about 300 BCE), we have known that there are infinitely many prime numbers. But how are they distributed? What proportion of numbers are prime, and how does this proportion change over different ranges? As theoretical questions, these belong the the field of analytic number theory. But it is hard to know what to prove without doing a bit of experimentation. And so, at least since Gauss (read Tschinkel's article about Gauss's tables) started examining his extensive tables of prime numbers, mathematicians have been carrying out experimental number theory.\nAnalyzing the list of primes\nLet's begin by creating our data set: the prime numbers up to 1 million.",
"primes = where(isprime_list(1000000))\n\nlen(primes) # Our population size. A statistician might call it N.\n\nprimes[-1] # The last prime in our list, just before one million.\n\ntype(primes) # What type is this data?\n\nprint(primes[:100]) # The first hundred prime numbers.",
"To carry out serious analysis, we will use the method of list comprehension to place our population into \"bins\" for statistical analysis. Our first type of list comprehension has the form [x for x in LIST if CONDITION]. This produces the list of all elements of LIST satisfying CONDITION. It is similar to list slicing, except we pull out terms from the list according to whether a condition is true or false.\nFor example, let's divide the (odd) primes into two classes. Red primes will be those of the form 4n+1. Blue primes will be those of the form 4n+3. In other words, a prime p is red if p%4 == 1 and blue if p%4 == 3. And the prime 2 is neither red nor blue.",
"redprimes = [p for p in primes if p%4 == 1] # Note the [x for x in LIST if CONDITION] syntax.\nblueprimes = [p for p in primes if p%4 == 3]\n\nprint('Red primes:',redprimes[:20]) # The first 20 red primes.\nprint('Blue primes:',blueprimes[:20]) # The first 20 blue primes.\n\nprint(\"There are {} red primes and {} blue primes, up to 1 million.\".format(len(redprimes), len(blueprimes)))",
"This is pretty close! It seems like prime numbers are about evenly distributed between red and blue. Their remainder after division by 4 is about as likely to be 1 as it is to be 3. In fact, it is proven that asymptotically the ratio between the number of red primes and the number of blue primes approaches 1. However, Chebyshev noticed a persistent slight bias towards blue primes along the way.\nSome of the deepest conjectures in mathematics relate to the prime counting function $\\pi(x)$. Here $\\pi(x)$ is the number of primes between 1 and $x$ (inclusive). So $\\pi(2) = 1$ and $\\pi(3) = 2$ and $\\pi(4) = 2$ and $\\pi(5) = 3$. One can compute a value of $\\pi(x)$ pretty easily using a list comprehension.",
"def primes_upto(x):\n return len([p for p in primes if p <= x]) # List comprehension recovers the primes up to x.\n\nprimes_upto(1000) # There are 168 primes between 1 and 1000.",
"Now we graph the prime counting function. To do this, we use a list comprehension, and the visualization library called matplotlib. For graphing a function, the basic idea is to create a list of x-values, a list of corresponding y-values (so the lists have to be the same length!), and then we feed the two lists into matplotlib to make the graph.\nWe begin by loading the necessary packages.",
"import matplotlib # A powerful graphics package.\nimport numpy # A math package\nimport matplotlib.pyplot as plt # A plotting subpackage in matplotlib.",
"Now let's graph the function $y = x^2$ over the domain $-2 \\leq x \\leq 2$ for practice. As a first step, we use numpy's linspace function to create an evenly spaced set of 11 x-values between -2 and 2.",
"x_values = numpy.linspace(-2,2,11) # The argument 11 is the *number* of terms, not the step size!\nprint(x_values)\ntype(x_values)",
"You might notice that the format looks a bit different from a list. Indeed, if you check type(x_values), it's not a list but something else called a numpy array. Numpy is a package that excels with computations on large arrays of data. On the surface, it's not so different from a list. The numpy.linspace command is a convenient way of producing an evenly spaced list of inputs.\nThe big difference is that operations on numpy arrays are interpreted differently than operations on ordinary Python lists. Try the two commands for comparison.",
"[1,2,3] + [1,2,3]\n\nx_values + x_values\n\ny_values = x_values * x_values # How is multiplication interpreted on numpy arrays?\nprint(y_values)",
"Now we use matplotlib to create a simple line graph.",
"%matplotlib inline\nplt.plot(x_values, y_values)\nplt.title('The graph of $y = x^2$') # The dollar signs surround the formula, in LaTeX format.\nplt.ylabel('y')\nplt.xlabel('x')\nplt.grid(True)\nplt.show()\n",
"Let's analyze the graphing code a bit more. See the official pyplot tutorial for more details.\npython\n%matplotlib inline\nplt.plot(x_values, y_values)\nplt.title('The graph of $y = x^2$') # The dollar signs surround the formula, in LaTeX format.\nplt.ylabel('y')\nplt.xlabel('x')\nplt.grid(True)\nplt.show()\nThe first line contains the magic %matplotlib inline. We have seen a magic word before, in %timeit. Magic words can call another program to assist. So here, the magic %matplotlib inline calls matplotlib for help, and places the resulting figure within the notebook.\nThe next line plt.plot(x_values, y_values) creates a plot object based on the data of the x-values and y-values. It is an abstract sort of object, behind the scenes, in a format that matplotlib understands. The following lines set the title of the plot, the axis labels, and turns a grid on. The last line plt.show renders the plot as an image in your notebook. There's an infinite variety of graphs that matplotlib can produce -- see the gallery for more! Other graphics packages include bokeh and seaborn, which extends matplotlib.\nAnalysis of the prime counting function\nNow, to analyze the prime counting function, let's graph it. To make a graph, we will first need a list of many values of x and many corresponding values of $\\pi(x)$. We do this with two commands. The first might take a minute to compute.",
"x_values = numpy.linspace(0,1000000,1001) # The numpy array [0,1000,2000,3000,...,1000000]\npix_values = numpy.array([primes_upto(x) for x in x_values]) # [FUNCTION(x) for x in LIST] syntax",
"We created an array of x-values as before. But the creation of an array of y-values (here, called pix_values to stand for $\\pi(x)$) probably looks strange. We have done two new things!\n\nWe have used a list comprehension [primes_upto(x) for x in x_values] to create a list of y-values.\nWe have used numpy.array(LIST) syntax to convert a Python list into a numpy array.\n\nFirst, we explain the list comprehension. Instead of pulling out values of a list according to a condition, with [x for x in LIST if CONDITION], we have created a new list based on performing a function each element of a list. The syntax, used above, is [FUNCTION(x) for x in LIST]. These two methods of list comprehension can be combined, in fact. The most general syntax for list comprehension is [FUNCTION(x) for x in LIST if CONDITION].\nSecond, a list comprehension can be carried out on a numpy array, but the result is a plain Python list. It will be better to have a numpy array instead for what follows, so we use the numpy.array() function to convert the list into a numpy array.",
"type(numpy.array([1,2,3])) # For example.",
"Now we have two numpy arrays: the array of x-values and the array of y-values. We can make a plot with matplotlib.",
"len(x_values) == len(pix_values) # These better be the same, or else matplotlib will be unhappy.\n\n%matplotlib inline\nplt.plot(x_values, pix_values)\nplt.title('The prime counting function')\nplt.ylabel('$\\pi(x)$')\nplt.xlabel('x')\nplt.grid(True)\nplt.show()",
"In this range, the prime counting function might look nearly linear. But if you look closely, there's a subtle downward bend. This is more pronounced in smaller ranges. For example, let's look at the first 10 x-values and y-values only.",
"%matplotlib inline\nplt.plot(x_values[:10], pix_values[:10]) # Look closer to 0.\nplt.title('The prime counting function')\nplt.ylabel('$\\pi(x)$')\nplt.xlabel('x')\nplt.grid(True)\nplt.show()",
"It still looks almost linear, but there's a visible downward bend here. How can we see this bend more clearly? If the graph were linear, its equation would have the form $\\pi(x) = mx$ for some fixed slope $m$ (since the graph does pass through the origin). Therefore, the quantity $\\pi(x)/x$ would be constant if the graph were linear. \nHence, if we graph $\\pi(x) / x$ on the y-axis and $x$ on the x-axis, and the result is nonconstant, then the function $\\pi(x)$ is nonlinear.",
"m_values = pix_values[1:] / x_values[1:] # We start at 1, to avoid a division by zero error.\n\n%matplotlib inline\nplt.plot(x_values[1:], m_values)\nplt.title('The ratio $\\pi(x) / x$ as $x$ varies.')\nplt.xlabel('x')\nplt.ylabel('$\\pi(x) / x$')\nplt.grid(True)\nplt.show()",
"That is certainly not constant! The decay of $\\pi(x) / x$ is not so different from $1 / \\log(x)$, in fact. To see this, let's overlay the graphs. We use the numpy.log function, which computes the natural logarithm of its input (and allows an entire array as input).",
"%matplotlib inline\nplt.plot(x_values[1:], m_values, label='$\\pi(x)/x$') # The same as the plot above.\nplt.plot(x_values[1:], 1 / numpy.log(x_values[1:]), label='$1 / \\log(x)$') # Overlay the graph of 1 / log(x)\nplt.title('The ratio of $\\pi(x) / x$ as $x$ varies.')\nplt.xlabel('x')\nplt.ylabel('$\\pi(x) / x$')\nplt.grid(True)\nplt.legend() # Turn on the legend.\nplt.show()",
"The shape of the decay of $\\pi(x) / x$ is very close to $1 / \\log(x)$, but it looks like there is an offset. In fact, there is, and it is pretty close to $1 / \\log(x)^2$. And that is close, but again there's another little offset, this time proportional to $2 / \\log(x)^3$. This goes on forever, if one wishes to approximate $\\pi(x) / x$ by an \"asymptotic expansion\" (not a good idea, it turns out).\nThe closeness of $\\pi(x) / x$ to $1 / \\log(x)$ is expressed in the prime number theorem:\n$$\\lim_{x \\rightarrow \\infty} \\frac{\\pi(x)}{x / \\log(x)} = 1.$$",
"%matplotlib inline\nplt.plot(x_values[1:], m_values * numpy.log(x_values[1:]) ) # Should get closer to 1.\nplt.title('The ratio $\\pi(x) / (x / \\log(x))$ approaches 1... slowly')\nplt.xlabel('x')\nplt.ylabel('$\\pi(x) / (x / \\log(x)) $')\nplt.ylim(0.8,1.2)\nplt.grid(True)\nplt.show()",
"Comparing the graph to the theoretical result, we find that the ratio $\\pi(x) / (x / \\log(x))$ approaches $1$ (the theoretical result) but very slowly (see the graph above!).\nA much stronger result relates $\\pi(x)$ to the \"logarithmic integral\" $li(x)$. The Riemann hypothesis is equivalent to the statement\n$$\\left\\vert \\pi(x) - li(x) \\right\\vert = O(\\sqrt{x} \\log(x)).$$\nIn other words, the error if one approximates $\\pi(x)$ by $li(x)$ is bounded by a constant times $\\sqrt{x} \\log(x)$. The logarithmic integral function isn't part of Python or numpy, but it is in the mpmath package. If you have this package installed, then you can try the following.",
"from mpmath import li\n\nprint(primes_upto(1000000)) # The number of primes up to 1 million.\nprint(li(1000000)) # The logarithmic integral of 1 million.",
"Not too shabby!\nPrime gaps\nAs a last bit of data analysis, we consider the prime gaps. These are the numbers that occur as differences between consecutive primes. Since all primes except 2 are odd, all prime gaps are even except for the 1-unit gap between 2 and 3. There are many unsolved problems about prime gaps; the most famous might be that a gap of 2 occurs infinitely often (as in the gaps between 3,5 and between 11,13 and between 41,43, etc.).\nOnce we have our data set of prime numbers, it is not hard to create a data set of prime gaps. Recall that primes is our list of prime numbers up to 1 million.",
"len(primes) # The number of primes up to 1 million.\n\nprimes_allbutlast = primes[:-1] # This excludes the last prime in the list.\nprimes_allbutfirst = primes[1:] # This excludes the first (i.e., with index 0) prime in the list.\n\nprimegaps = numpy.array(primes_allbutfirst) - numpy.array(primes_allbutlast) # Numpy is fast!\n\nprint(primegaps[:100]) # The first hundred prime gaps!",
"What have we done? It is useful to try out this method on a short list.",
"L = [1,3,7,20] # A nice short list.\n\nprint(L[:-1])\nprint(L[1:])",
"Now we have two lists of the same length. The gaps in the original list L are the differences between terms of the same index in the two new lists. One might be tempted to just subtract, e.g., with the command L[1:] - L[:-1], but subtraction is not defined for lists.\nFortunately, by converting the lists to numpy arrays, we can use numpy's term-by-term subtraction operation.",
"L[1:] - L[:-1] # This will give a TypeError. You can't subtract lists!\n\nnumpy.array(L[1:]) - numpy.array(L[:-1]) # That's better. See the gaps in the list [1,3,7,20] in the output.",
"Now let's return to our primegaps data set. It contains all the gap-sizes for primes up to 1 million.",
"print(len(primes))\nprint(len(primegaps)) # This should be one less than the number of primes.",
"As a last example of data visualization, we use matplotlib to produce a histogram of the prime gaps.",
"max(primegaps) # The largest prime gap that appears!\n\n%matplotlib inline\nplt.figure(figsize=(12, 5)) # Makes the resulting figure 12in by 5in.\nplt.hist(primegaps, bins=range(1,115)) # Makes a histogram with one bin for each possible gap from 1 to 114.\nplt.ylabel('Frequency')\nplt.xlabel('Gap size')\nplt.grid(True)\nplt.title('The frequency of prime gaps, for primes up to 1 million')\nplt.show()",
"Observe that gaps of 2 (twin primes) are pretty frequent. There are over 8000 of them, and about the same number of 4-unit gaps! But gaps of 6 are most frequent in the population, and there are some interesting peaks at 6, 12, 18, 24, 30. What else do you observe?\nExercises\n\n\nCreate functions redprimes_upto(x) and blueprimes_upto(x) which count the number of red/blue primes up to a given number x. Recall that we defined red/blue primes to be those of the form 4n+1 or 4n+3, respectively. Graph the relative proportion of red/blue primes as x varies from 1 to 1 million. E.g., are the proportions 50%/50% or 70%/30%, and how do these proportions change? Note: this is also visualized in An Illustrated Theory of Numbers and you can read an article by Rubinstein and Sarnak for more.\n\n\nDoes there seem to be a bias in the last digits of primes? Note that, except for 2 and 5, every prime ends in 1,3,7, or 9. Note: the last digit of a number n is obtained from n % 10. \n\n\nRead about the \"Prime Conspiracy\", recently discovered by Lemke Oliver and Soundararajan. Can you detect their conspiracy in our data set of primes?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kit-cel/wt | SC468/LDPC_Optimization_AWGN.ipynb | gpl-2.0 | [
"Optimization of Degree Distributions on the AWGN\nThis code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.\nThis code illustrates\n* Using linear programming to optimize degree distributions on the AWGN channel using EXIT charts",
"import numpy as np\nimport matplotlib.pyplot as plot\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\nimport math\nfrom pulp import *\n%matplotlib inline ",
"Approximation of the J-function taken from [1] with\n$$\nJ(\\mu) \\approx \\left(1 - 2^{-H_1\\cdot (2\\mu)^{H_2}}\\right)^{H_3}\n$$\nand its inverse function can be easily found as\n$$\n\\mu = J^{-1}(I) \\approx \\frac{1}{2}\\left(-\\frac{1}{H_1}\\log_2\\left(1-I^{\\frac{1}{H_3}}\\right)\\right)^{\\frac{1}{H_2}}\n$$\nwith $H_1 = 0.3073$, $H_2=0.8935$, and $H_3 = 1.1064$.\n[1] F. Schreckenbach, Iterative Decoding of Bit-Interleaved Coded Modulation , PhD thesis, TU Munich, 2007",
"H1 = 0.3073\nH2 = 0.8935\nH3 = 1.1064\n\ndef J_fun(mu): \n I = (1 - 2**(-H1*(2*mu)**H2))**H3\n return I\n\ndef invJ_fun(I):\n if I > (1-1e-10):\n return 100\n mu = 0.5*(-(1/H1) * np.log2(1 - I**(1/H3)))**(1/H2)\n return mu",
"The following function solves the optimization problem that returns the best $\\lambda(Z)$ for a given BI-AWGN channel quality $E_s/N_0$, corresponding to a $\\mu_c = 4\\frac{E_s}{N_0}$, for a regular check node degree $d_{\\mathtt{c}}$, and for a maximum variable node degree $d_{\\mathtt{v},\\max}$. This optimization problem is derived in the lecture as\n$$\n\\begin{aligned}\n& \\underset{\\lambda_1,\\ldots,\\lambda_{d_{\\mathtt{v},\\max}}}{\\text{maximize}} & & \\sum_{i=1}^{d_{\\mathtt{v},\\max}}\\frac{\\lambda_i}{i} \\\n& \\text{subject to} & & \\lambda_1 = 0 \\\n& & & \\lambda_i \\geq 0, \\quad \\forall i \\in{2,3,\\ldots,d_{\\mathtt{v},\\max}} \\\n& & & \\sum_{i=2}^{d_{\\mathtt{v},\\max}}\\lambda_i = 1 \\\n& & & \\sum_{i=2}^{d_{\\mathtt{v},\\max}}\\lambda_i\\cdot J\\left(\\mu_c + (i-1)J^{-1}\\left(\\frac{j}{D}\\right)\\right) > 1 - J\\left(\\frac{1}{d_{\\mathtt{c}}-1}J^{-1}\\left(1-\\frac{j}{D}\\right)\\right),\\quad \\forall j \\in {1,\\ldots, D} \\\n& & & \\lambda_2 \\leq \\frac{e^{\\frac{\\mu_c}{4}}}{d_{\\mathtt{c}}-1}\n\\end{aligned}\n$$\nIf this optimization problem is feasible, then the function returns the polynomial $\\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\\lambda_{d_{\\mathtt{v},\\max}}$) and the last entry to the lowest exponent ($\\lambda_1$). If the optimization problem has no solution (e.g., it is unfeasible), then the empty vector is returned.",
"def find_best_lambda(mu_c, v_max, dc): \n # quantization of EXIT chart\n D = 500\n I_range = np.arange(0, D, 1)/D \n \n # Linear Programming model, maximize target expression\n model = pulp.LpProblem(\"Finding best lambda problem\", pulp.LpMaximize)\n\n # definition of variables, v_max entries \\lambda_i that are between 0 and 1 (implicit declaration of constraint 2)\n v_lambda = pulp.LpVariable.dicts(\"lambda\", range(v_max),0,1)\n \n # objective function\n cv = 1/np.arange(v_max,0,-1) \n model += pulp.lpSum(v_lambda[i]*cv[i] for i in range(v_max)) \n \n # constraints\n # constraint 1, no variable nodes of degree 1\n model += v_lambda[v_max-1] == 0\n \n # constraint 3, sum of lambda_i must be 1\n model += pulp.lpSum(v_lambda[i] for i in range(v_max))==1\n \n # constraints 4, fixed point condition for all the descrete xi values (a total number of D, for each \\xi) \n for myI in I_range: \n model += pulp.lpSum(v_lambda[j] * J_fun(mu_c + (v_max-1-j)*invJ_fun(myI)) for j in range(v_max)) - 1 + J_fun(1/(dc-1)*invJ_fun(1-myI)) >= 0\n \n # constraint 5, stability condition\n model += v_lambda[v_max-2] <= np.exp(mu_c/4)/(dc-1)\n\n model.solve()\n if model.status != 1:\n r_lambda = []\n else:\n r_lambda = [v_lambda[i].varValue for i in range(v_max)]\n return r_lambda ",
"As an example, we consider the case of optimization carried out in the lecture after 10 iterations, where we have $\\mu_c = 3.8086$ and $d_{\\mathtt{c}} = 14$ with $d_{\\mathtt{v},\\max}=16$",
"best_lambda = find_best_lambda(3.8086, 16, 14)\nprint(np.poly1d(best_lambda, variable='Z'))",
"In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\\lambda(Z)$. Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution.",
"def best_lambda_interactive(mu_c, v_max, dc):\n # get lambda and rho polynomial from optimization and from c_avg, respectively\n p_lambda = find_best_lambda(mu_c, v_max, dc)\n \n # if optimization successful, compute rate and show plot\n if not p_lambda:\n print('Optimization infeasible, no solution found')\n else:\n design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda),1))\n if design_rate <= 0:\n print('Optimization feasible, but no code with positive rate found')\n else:\n print(\"Lambda polynomial:\")\n print(np.poly1d(p_lambda, variable='Z'))\n print(\"Design rate r_d = %1.3f\" % design_rate)\n \n # Plot EXIT-Chart\n print(\"EXIT Chart:\")\n plot.figure(3) \n x = np.linspace(0, 1, num=100)\n y_v = [np.sum([p_lambda[j] * J_fun(mu_c + (v_max-1-j)*invJ_fun(xv)) for j in range(v_max)]) for xv in x] \n y_c = [1-J_fun((dc-1)*invJ_fun(1-xv)) for xv in x] \n plot.plot(x, y_v, '#7030A0')\n plot.plot(y_c, x, '#008000') \n plot.axis('equal')\n plot.gca().set_aspect('equal', adjustable='box')\n plot.xlim(0,1)\n plot.ylim(0,1) \n plot.grid()\n plot.show()\n\ninteractive_plot = interactive(best_lambda_interactive, \\\n mu_c=widgets.FloatSlider(min=0.5,max=8,step=0.01,value=3, continuous_update=False, description=r'\\(\\mu_c\\)',layout=widgets.Layout(width='50%')), \\\n v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\\(d_{\\mathtt{v},\\max}\\)'), \\\n dc = widgets.IntSlider(min=3,max=20,step=1,value=4, continuous_update=False, description=r'\\(d_{\\mathtt{c}}\\)')) \noutput = interactive_plot.children[-1]\noutput.layout.height = '400px'\ninteractive_plot",
"Now, we carry out the optimization over a wide range of $d_{\\mathtt{c},\\text{avg}}$ values for a given $\\epsilon$ and find the largest possible rate.",
"def find_best_rate(mu_c, dv_max, dc_max):\n c_range = np.arange(3, dc_max+1)\n rates = np.zeros_like(c_range,dtype=float)\n \n \n # loop over all c_avg, add progress bar\n f = widgets.FloatProgress(min=0, max=np.size(c_range))\n display(f)\n for index,dc in enumerate(c_range):\n f.value += 1 \n p_lambda = find_best_lambda(mu_c, dv_max, dc) \n if p_lambda: \n design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda),1))\n if design_rate >= 0:\n rates[index] = design_rate\n \n # find largest rate\n largest_rate_index = np.argmax(rates)\n best_lambda = find_best_lambda(mu_c, dv_max, c_range[largest_rate_index])\n print(\"Found best code of rate %1.3f for average check node degree of %1.2f\" % (rates[largest_rate_index], c_range[largest_rate_index]))\n print(\"Corresponding lambda polynomial\")\n print(np.poly1d(best_lambda, variable='Z'))\n \n # Plot curve with all obtained results\n plot.figure(4, figsize=(10,3)) \n plot.plot(c_range, rates, 'b--s',color=(0, 0.59, 0.51))\n plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'rs')\n plot.xlim(3, dc_max)\n plot.xticks(range(3,dc_max+1))\n plot.ylim(0, 1)\n plot.xlabel('$d_{\\mathtt{c}}$')\n plot.ylabel('design rate $r_d$')\n plot.grid()\n plot.show()\n\n return rates[largest_rate_index]\n \ninteractive_optim = interactive(find_best_rate, \\\n mu_c=widgets.FloatSlider(min=0.1,max=10,step=0.01,value=2, continuous_update=False, description=r'\\(\\mu_c\\)',layout=widgets.Layout(width='50%')), \\\n dv_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\\(d_{\\mathtt{v},\\max}\\)'), \\\n dc_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\\(d_{\\mathtt{c},\\max}\\)'))\noutput = interactive_optim.children[-1]\noutput.layout.height = '400px'\ninteractive_optim",
"Running binary search to find code with a given target rate for the AWGN channel",
"target_rate = 0.7\ndv_max = 16\ndc_max = 22\n\nT_Delta = 0.01\nmu_c = 10\nDelta_mu = 10\n\nwhile Delta_mu >= T_Delta: \n print('Running optimization for mu_c = %1.5f, corresponding to Es/N0 = %1.2f dB' % (mu_c, 10*np.log10(mu_c/4)))\n \n rate = find_best_rate(mu_c, dv_max, dc_max)\n if rate > target_rate:\n mu_c = mu_c - Delta_mu / 2\n else:\n mu_c = mu_c + Delta_mu / 2\n \n Delta_mu = Delta_mu / 2"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UWSEDS/LectureNotes | PreFall2018/02-Python-and-Data/Lecture-Python-and-Data.ipynb | bsd-2-clause | [
"# Some styling stuff... ignore this for now!\nfrom IPython.display import HTML\nHTML(\"\"\"<style>\n .rendered_html {font-size: 140%;}\n .rendered_html h1, h2 {text-align:center;}\n</style>\"\"\")",
"Software Engineering for Data Scientists\nManipulating Data with Python\nCSE 599 B1\nToday's Objectives\n1. Opening & Navigating the IPython Notebook\n2. Simple Math in the IPython Notebook\n3. Loading data with pandas\n4. Cleaning and Manipulating data with pandas\n5. Visualizing data with pandas\n1. Opening and Navigating the IPython Notebook\nWe will start today with the interactive environment that we will be using often through the course: the IPython/Jupyter Notebook.\nWe will walk through the following steps together:\n\n\nDownload miniconda (be sure to get Version 3.5) and install it on your system (hopefully you have done this before coming to class)\n \n\n\nUse the conda command-line tool to update your package listing and install the IPython notebook:\n\n\nUpdate conda's listing of packages for your system:\n $ conda update conda\nInstall IPython notebook and all its requirements\n $ conda install ipython-notebook\n\nNavigate to the directory containing the course material. For example:\n\n$ cd ~/courses/CSE599/\nYou should see a number of files in the directory, including these:\n$ ls\n ...\n Breakout-Simple-Math.ipynb\n CSE599_Lecture_2.ipynb\n ...\n\nType ipython notebook in the terminal to start the notebook\n\n$ ipython notebook\nIf everything has worked correctly, it should automatically launch your default browser\n \n\nClick on CSE599_Lecture_2.ipynb to open the notebook containing the content for this lecture.\n\nWith that, you're set up to use the IPython notebook!\n2. Simple Math in the IPython Notebook\nNow that we have the IPython notebook up and running, we're going to do a short breakout exploring some of the mathematical functionality that Python offers.\nPlease open Breakout-Simple-Math.ipynb, find a partner, and make your way through that notebook, typing and executing code along the way.\n3. Loading data with pandas\nWith this simple Python computation experience under our belt, we can now move to doing some more interesting analysis.\nPython's Data Science Ecosystem\nIn addition to Python's built-in modules like the math module we explored above, there are also many often-used third-party modules that are core tools for doing data science with Python.\nSome of the most important ones are:\nnumpy: Numerical Python\nNumpy is short for \"Numerical Python\", and contains tools for efficient manipulation of arrays of data.\nIf you have used other computational tools like IDL or MatLab, Numpy should feel very familiar.\nscipy: Scientific Python\nScipy is short for \"Scientific Python\", and contains a wide range of functionality for accomplishing common scientific tasks, such as optimization/minimization, numerical integration, interpolation, and much more.\nWe will not look closely at Scipy today, but we will use its functionality later in the course.\npandas: Labeled Data Manipulation in Python\nPandas is short for \"Panel Data\", and contains tools for doing more advanced manipulation of labeled data in Python, in particular with a columnar data structure called a Data Frame.\nIf you've used the R statistical language (and in particular the so-called \"Hadley Stack\"), much of the functionality in Pandas should feel very familiar.\nmatplotlib: Visualization in Python\nMatplotlib started out as a Matlab plotting clone in Python, and has grown from there in the 15 years since its creation. 
It is the most popular data visualization tool currently in the Python data world (though other recent packages are starting to encroach on its monopoly).\nInstalling Pandas & friends\nBecause the above packages are not included in Python itself, you need to install them separately. While it is possible to install these from source (compiling the C and/or Fortran code that does the heavy lifting under the hood) it is much easier to use a package manager like conda. All it takes is to run\n$ conda install numpy scipy pandas matplotlib\nand (so long as your conda setup is working) the packages will be downloaded and installed on your system.\nLoading Data with Pandas",
"import pandas",
"Because we'll use it so much, we often import under a shortened name using the import ... as ... pattern:",
"import pandas as pd",
"Now we can use the read_csv command to read the comma-separated-value data:\nViewing Pandas Dataframes\nThe head() and tail() methods show us the first and last rows of the data\nThe shape attribute shows us the number of elements:\nThe columns attribute gives us the column names\nThe index attribute gives us the index names\nThe dtypes attribute gives the data types of each column:\n4. Manipulating data with pandas\nHere we'll cover some key features of manipulating data with pandas\nAccess columns by name using square-bracket indexing:\nMathematical operations on columns happen element-wise:\nColumns can be created (or overwritten) with the assignment operator.\nLet's create a tripminutes column with the number of minutes for each trip\nWorking with Times\nOne trick to know when working with columns of times is that Pandas DateTimeIndex provides a nice interface for working with columns of times:\nWith it, we can extract, the hour of the day, the day of the week, the month, and a wide range of other views of the time:\nSimple Grouping of Data\nThe real power of Pandas comes in its tools for grouping and aggregating data. Here we'll look at value counts and the basics of group-by operations.\nValue Counts\nPandas includes an array of useful functionality for manipulating and analyzing tabular data.\nWe'll take a look at two of these here.\nThe pandas.value_counts returns statistics on the unique values within each column.\nWe can use it, for example, to break down rides by gender:\nOr to break down rides by age:\nWhat else might we break down rides by?\nGroup-by Operation\nOne of the killer features of the Pandas dataframe is the ability to do group-by operations.\nYou can visualize the group-by like this (image borrowed from the Python Data Science Handbook)",
"from IPython.display import Image\nImage('split_apply_combine.png')",
"So, for example, we can use this to find the average length of a ride as a function of time of day:\nThe simplest version of a groupby looks like this, and you can use almost any aggregation function you wish (mean, median, sum, minimum, maximum, standard deviation, count, etc.)\n<data object>.groupby(<grouping values>).<aggregate>()\nYou can even group by multiple values: for example we can look at the trip duration by time of day and by gender:\nThe unstack() operation can help make sense of this type of multiply-grouped data:\n5. Visualizing data with pandas\nOf course, looking at tables of data is not very intuitive.\nFortunately Pandas has many useful plotting functions built-in, all of which make use of the matplotlib library to generate plots.\nWhenever you do plotting in the IPython notebook, you will want to first run this magic command which configures the notebook to work well with plots:",
"%matplotlib inline",
"Now we can simply call the plot() method of any series or dataframe to get a reasonable view of the data:\nAdjusting the Plot Style\nThe default formatting is not very nice; I often make use of the Seaborn library for better plotting defaults.\nFirst you'll have to\n$ conda install seaborn\nand then you can do this:",
"import seaborn\nseaborn.set()",
"And now re-run the plot from above:\nOther plot types\nPandas supports a range of other plotting types; you can find these by using the <TAB> autocomplete on the plot method:\nFor example, we can create a histogram of trip durations:\nIf you'd like to adjust the x and y limits of the plot, you can use the set_xlim() and set_ylim() method of the resulting object:\nBreakout: Exploring the Data\n\n\nMake a plot of the total number of rides as a function of month of the year (You'll need to extract the month, use a groupby, and find the appropriate aggregation to count the number in each group).\n\n\nSplit this plot by gender. Do you see any seasonal ridership patterns by gender?\n\n\nSplit this plot by user type. Do you see any seasonal ridership patterns by usertype?\n\n\nRepeat the above three steps, counting the number of rides by time of day rather thatn by month.\n\n\nAre there any other interesting insights you can discover in the data using these tools?\n\n\nLooking Forward to Homework\nIn the homework this week, you will have a chance to apply some of these patterns to a brand new (but closely related) dataset."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Upward-Spiral-Science/grelliam | code/classification_simulation.ipynb | apache-2.0 | [
"Simulated Classifcation\n\nState assumptions\nFormally define classification/regression problem\nprovide algorithm for solving problem (including choosing hyperparameters as appropriate)\nsample data from a simulation setting inspired by your data (from both null and alternative as defined before)\ncompute accuracy\nplot accuracy vs. sample size in simulation\napply method directly on real data\nexplain the degree to which you believe the result and why\n\nStep 1: State assumptions\n$F_{X|0} = ER(p_0) = Bern(p_0)^{V \\times V}$ <br/>\n$F_{X|1} = ER(p_1) = Bern(p_1)^{V \\times V}$\n$p_1 \\neq p_2$\nStep 2: Formally define classification/regression problem\n$G_i, Y_i \\sim \\mathscr{F}{G,Y} = { F{G,Y}(\\cdot; \\theta) : \\theta \\in \\Theta }$.\nSince, all samples observed are graph matched (i.e. nodes are equal across graphs), we can look at just the distribution of adjacency matrices:\n$F_{G,Y} = F_{X,Y}$.\nThus,\n$X_i = \\prod_{u,v}^{\\mathcal{E}} A_{uv}$, where $\\mathcal{E} \\subset V \\times V$ <br/>\n$Y_i = {0,1}$\nAs we are doing classification, we are trying to minimize expected error. Here, expected error can be defined as:\n$E[l] = \\sum \\Theta(\\hat{Y}_i \\neq Y_i)$\nWhere $\\Theta$ is the indicator function.\nStep 3: Provide algorithm for solving problem (including choosing hyperparameters as appropriate)\nclassification:\n- lda (linear discriminant analysis): no parameters\n- qda (quadratic discriminant analysis): no parameters\n- svm (support vector machine): penalty parameters set to 0.5 because it was a default suggested \n- knn (k-nearest neighbours): number of neighbors set to 3 because it was a default suggested\n- rf (random forest): like the above, I didn't have better insight so went with defaults. Seemed like a simple starting point, as we always aim for.\nregression: linear regression, support vector regression, k-nearest neighbour regression, random forest regression, polynomial regression\nSetup Step",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport csv\nimport igraph as ig\n\nfrom sklearn import cross_validation\nfrom sklearn.cross_validation import LeaveOneOut\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\n\n%matplotlib inline\n\nnp.random.seed(12345678) # for reproducibility, set random seed\nr = 20 # define number of rois\nN = 100 # number of samples at each iteration\np0 = 0.10\np1 = 0.15\n# define number of subjects per class\nS = np.array((8, 16, 20, 32, 40, 64, 80, 100, 120, 200, 320,\n 400, 800, 1000))\n\nnames = [\"Nearest Neighbors\", \"Linear SVM\", \"Random Forest\",\n \"Linear Discriminant Analysis\", \"Quadratic Discriminant Analysis\"]\n\nclassifiers = [\n KNeighborsClassifier(3),\n SVC(kernel=\"linear\", C=0.5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n LinearDiscriminantAnalysis(),\n QuadraticDiscriminantAnalysis()]",
"Steps 4 & 5: Sample data from setting similar to data and record classification accuracy",
"accuracy = np.zeros((len(S), len(classifiers), 2), dtype=np.dtype('float64'))\nfor idx1, s in enumerate(S):\n s0=s/2\n s1=s/2\n\n g0 = 1 * (np.random.rand( r, r, s0) > 1-p0)\n g1 = 1 * (np.random.rand( r, r, s1) > 1-p1)\n mbar0 = 1.0*np.sum(g0, axis=(0,1))\n mbar1 = 1.0*np.sum(g1, axis=(0,1))\n\n X = np.array((np.append(mbar0, mbar1), np.append(mbar0/( r**2), mbar1/( r**2 )))).T\n y = np.append(np.zeros(s0), np.ones(s1))\n \n for idx2, cla in enumerate(classifiers):\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)\n clf = cla.fit(X_train, y_train)\n loo = LeaveOneOut(len(X))\n scores = cross_validation.cross_val_score(clf, X, y, cv=loo)\n accuracy[idx1, idx2,] = [scores.mean(), scores.std()]\n print(\"Accuracy of %s: %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))\n \nprint accuracy",
"Step 6: Plot Accuracy versus N",
"font = {'weight' : 'bold',\n 'size' : 14}\n\nimport matplotlib\nmatplotlib.rc('font', **font)\n\nplt.figure(figsize=(8,5))\nplt.errorbar(S, accuracy[:,0,0], yerr = accuracy[:,0,1]/np.sqrt(S), hold=True, label=names[0])\nplt.errorbar(S, accuracy[:,1,0], yerr = accuracy[:,1,1]/np.sqrt(S), color='green', hold=True, label=names[1])\nplt.errorbar(S, accuracy[:,2,0], yerr = accuracy[:,2,1]/np.sqrt(S), color='red', hold=True, label=names[2])\nplt.errorbar(S, accuracy[:,3,0], yerr = accuracy[:,3,1]/np.sqrt(S), color='black', hold=True, label=names[3])\nplt.errorbar(S, accuracy[:,4,0], yerr = accuracy[:,4,1]/np.sqrt(S), color='brown', hold=True, label=names[4])\nplt.xscale('log')\nplt.xlabel('Number of Samples')\nplt.xlim((0,2100))\nplt.ylim((-0.05, 1.05))\nplt.ylabel('Accuracy')\nplt.title('Gender Classification of Simulated Data')\nplt.axhline(1, color='red', linestyle='--')\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.savefig('../figs/general_classification.png')\nplt.show()",
"Step 7: Apply technique to data",
"# Initializing dataset names\ndnames = list(['../data/KKI2009'])\nprint \"Dataset: \" + \", \".join(dnames)\n\n# Getting graph names\nfs = list()\nfor dd in dnames:\n fs.extend([root+'/'+file for root, dir, files in os.walk(dd) for file in files])\nfs = fs[1:]\ndef loadGraphs(filenames, rois, printer=False):\n A = np.zeros((rois, rois, len(filenames)))\n for idx, files in enumerate(filenames):\n if printer:\n print \"Loading: \" + files\n g = ig.Graph.Read_GraphML(files)\n tempg = g.get_adjacency(attribute='weight')\n A[:,:,idx] = np.asarray(tempg.data)\n \n return A\n\n# Load X\nX = loadGraphs(fs, 70)\nprint X.shape\n\n# Load Y\nys = csv.reader(open('../data/kki42_subjectinformation.csv'))\ny = [y[5] for y in ys]\ny = [1 if x=='F' else 0 for x in y[1:]]\n\nxf = 1.0*np.sum(1.0*(X>0), axis=(0,1))\nfeatures = np.array((xf, xf/( 70**2 * 22))).T\n\naccuracy=np.zeros((len(classifiers),2))\nfor idx, cla in enumerate(classifiers):\n X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, y, test_size=0.4, random_state=0)\n clf = cla.fit(X_train, y_train)\n loo = LeaveOneOut(len(features))\n scores = cross_validation.cross_val_score(clf, features, y, cv=loo)\n accuracy[idx,] = [scores.mean(), scores.std()]\n print(\"Accuracy of %s: %0.2f (+/- %0.2f)\" % (names[idx], scores.mean(), scores.std() * 2))",
"Step 8: Reflect on result\nThe classification accuracy on real data based on the five tested classifiers is, at best, 71%, and worst, chance. Next, I need to test my assumptions to see if they are accurate and adjust my processing/features to better represent my true scenario than the assumed conditions, if possible."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
machinelearningnanodegree/stanford-cs231 | solutions/levin/assignment2/FullyConnectedNets.ipynb | mit | [
"Fully-Connected Neural Nets\nIn the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.\nIn this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:\n```python\ndef layer_forward(x, w):\n \"\"\" Receive inputs x and weights w \"\"\"\n # Do some computations ...\n z = # ... some intermediate value\n # Do some more computations ...\n out = # the output\ncache = (x, w, z, out) # Values we need to compute gradients\nreturn out, cache\n```\nThe backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:\n```python\ndef layer_backward(dout, cache):\n \"\"\"\n Receive derivative of loss with respect to outputs and cache,\n and compute derivative with respect to inputs.\n \"\"\"\n # Unpack cache values\n x, w, z, out = cache\n# Use values in cache to compute derivatives\n dx = # Derivative of loss with respect to x\n dw = # Derivative of loss with respect to w\nreturn dx, dw\n```\nAfter implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\nIn addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.",
"# As usual, a bit of setup\nimport sys\nimport os\nsys.path.insert(0, os.path.abspath('..'))\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape",
"Affine layer: foward\nOpen the file cs231n/layers.py and implement the affine_forward function.\nOnce you are done you can test your implementaion by running the following:",
"# Test the affine_forward function\n\nnum_inputs = 2\ninput_shape = (4, 5, 6)\noutput_dim = 3\n\ninput_size = num_inputs * np.prod(input_shape)\nweight_size = output_dim * np.prod(input_shape)\n\nx = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)\nw = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)\nb = np.linspace(-0.3, 0.1, num=output_dim)\n\nout, _ = affine_forward(x, w, b)\ncorrect_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],\n [ 3.25553199, 3.5141327, 3.77273342]])\n\n# Compare your output with ours. The error should be around 1e-9.\nprint 'Testing affine_forward function:'\nprint 'difference: ', rel_error(out, correct_out)",
"Affine layer: backward\nNow implement the affine_backward function and test your implementation using numeric gradient checking.",
"# Test the affine_backward function\n\nx = np.random.randn(10, 2, 3)\nw = np.random.randn(6, 5)\nb = np.random.randn(5)\ndout = np.random.randn(10, 5)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)\n\n_, cache = affine_forward(x, w, b)\ndx, dw, db = affine_backward(dout, cache)\n\n# The error should be around 1e-10\nprint 'Testing affine_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)",
"ReLU layer: forward\nImplement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:",
"# Test the relu_forward function\n\nx = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)\n\nout, _ = relu_forward(x)\ncorrect_out = np.array([[ 0., 0., 0., 0., ],\n [ 0., 0., 0.04545455, 0.13636364,],\n [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])\n\n# Compare your output with ours. The error should be around 1e-8\nprint 'Testing relu_forward function:'\nprint 'difference: ', rel_error(out, correct_out)",
"ReLU layer: backward\nNow implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:",
"x = np.random.randn(10, 10)\ndout = np.random.randn(*x.shape)\n\ndx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)\n\n_, cache = relu_forward(x)\ndx = relu_backward(dout, cache)\n\n# The error should be around 1e-12\nprint 'Testing relu_backward function:'\nprint 'dx error: ', rel_error(dx_num, dx)",
"\"Sandwich\" layers\nThere are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.\nFor now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:",
"from cs231n.layer_utils import affine_relu_forward, affine_relu_backward\n\nx = np.random.randn(2, 3, 4)\nw = np.random.randn(12, 10)\nb = np.random.randn(10)\ndout = np.random.randn(2, 10)\n\nout, cache = affine_relu_forward(x, w, b)\ndx, dw, db = affine_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)\n\nprint 'Testing affine_relu_forward:'\nprint 'dx error: ', rel_error(dx_num, dx)\nprint 'dw error: ', rel_error(dw_num, dw)\nprint 'db error: ', rel_error(db_num, db)",
"Loss layers: Softmax and SVM\nYou implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.\nYou can make sure that the implementations are correct by running the following:",
"num_classes, num_inputs = 10, 50\nx = 0.001 * np.random.randn(num_inputs, num_classes)\ny = np.random.randint(num_classes, size=num_inputs)\n\ndx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)\nloss, dx = svm_loss(x, y)\n\n# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9\nprint 'Testing svm_loss:'\nprint 'loss: ', loss\nprint 'dx error: ', rel_error(dx_num, dx)\n\ndx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)\nloss, dx = softmax_loss(x, y)\n\n# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8\nprint '\\nTesting softmax_loss:'\nprint 'loss: ', loss\nprint 'dx error: ', rel_error(dx_num, dx)",
"Two-layer network\nIn the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.\nOpen the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.",
"N, D, H, C = 3, 5, 50, 7\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=N)\n\nstd = 1e-2\nmodel = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)\n\nprint 'Testing initialization ... '\nW1_std = abs(model.params['W1'].std() - std)\nb1 = model.params['b1']\nW2_std = abs(model.params['W2'].std() - std)\nb2 = model.params['b2']\nassert W1_std < std / 10, 'First layer weights do not seem right'\nassert np.all(b1 == 0), 'First layer biases do not seem right'\nassert W2_std < std / 10, 'Second layer weights do not seem right'\nassert np.all(b2 == 0), 'Second layer biases do not seem right'\n\nprint 'Testing test-time forward pass ... '\nmodel.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)\nmodel.params['b1'] = np.linspace(-0.1, 0.9, num=H)\nmodel.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)\nmodel.params['b2'] = np.linspace(-0.9, 0.1, num=C)\nX = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T\nscores = model.loss(X)\ncorrect_scores = np.asarray(\n [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],\n [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],\n [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])\nscores_diff = np.abs(scores - correct_scores).sum()\nassert scores_diff < 1e-6, 'Problem with test-time forward pass'\n\nprint 'Testing training loss (no regularization)'\ny = np.asarray([0, 5, 1])\nloss, grads = model.loss(X, y)\ncorrect_loss = 3.4702243556\nassert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n\nmodel.reg = 1.0\nloss, grads = model.loss(X, y)\ncorrect_loss = 26.5948426952\nassert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n\nfor reg in [0.0, 0.7]:\n print 'Running numeric gradient check with reg = ', reg\n model.reg = reg\n loss, grads = model.loss(X, y)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))",
"Solver\nIn the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\nOpen the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.",
"# model = TwoLayerNet()\n# solver = None\n\n##############################################################################\n# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #\n# 50% accuracy on the validation set. #\n##############################################################################\ninput_dim=3*32*32\nhidden_dim=100\nnum_classes=10\nweight_scale=1e-3\nreg=0.0\nmodel = TwoLayerNet(input_dim=input_dim, hidden_dim=hidden_dim, num_classes=num_classes,\n weight_scale=weight_scale, reg=reg)\n\nsolver = Solver(model, data,\n update_rule='sgd',\n optim_config={\n 'learning_rate': 1e-3,\n },\n lr_decay=0.95,\n num_epochs=10, batch_size=100,\n print_every=100)\nsolver.train()\n##############################################################################\n# END OF YOUR CODE #\n##############################################################################\n\n# Run this cell to visualize training loss and train / val accuracy\n\nplt.subplot(2, 1, 1)\nplt.title('Training loss')\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('Iteration')\n\nplt.subplot(2, 1, 2)\nplt.title('Accuracy')\nplt.plot(solver.train_acc_history, '-o', label='train')\nplt.plot(solver.val_acc_history, '-o', label='val')\nplt.plot([0.5] * len(solver.val_acc_history), 'k--')\nplt.xlabel('Epoch')\nplt.legend(loc='lower right')\nplt.gcf().set_size_inches(15, 12)\nplt.show()",
"Multilayer network\nNext you will implement a fully-connected network with an arbitrary number of hidden layers.\nRead through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.\nImplement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.\nInitial loss and gradient check\nAs a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?\nFor gradient checking, you should expect to see errors around 1e-6 or less.",
"N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print 'Running check with reg = ', reg\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))",
"As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a three-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-2\nweight_scale = 1e-2\nmodel = FullyConnectedNet([100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.",
"# TODO: Use a five-layer Net to overfit 50 training examples.\n\nnum_train = 50\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nlearning_rate = 1e-2\nweight_scale = 6e-2\nmodel = FullyConnectedNet([100, 100, 100, 100],\n weight_scale=weight_scale, dtype=np.float64)\nsolver = Solver(model, small_data,\n print_every=10, num_epochs=20, batch_size=25,\n update_rule='sgd',\n optim_config={\n 'learning_rate': learning_rate,\n }\n )\nsolver.train()\n\nplt.plot(solver.loss_history, 'o')\nplt.title('Training loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Training loss')\nplt.show()",
"Inline question:\nDid you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net?\nAnswer:\nIt's much harder to find the right weight initialization and learning rate for five layer net. As the network grows deeper, we tend to have more dead activations, and thus kill the backward gradient.\nUpdate rules\nSo far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.\nSGD+Momentum\nStochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent.\nOpen the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.",
"from cs231n.optim import sgd_momentum\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nv = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-3, 'velocity': v}\nnext_w, _ = sgd_momentum(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],\n [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],\n [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],\n [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])\nexpected_velocity = np.asarray([\n [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],\n [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],\n [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],\n [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])\n\nprint 'next_w error: ', rel_error(next_w, expected_next_w)\nprint 'velocity error: ', rel_error(expected_velocity, config['velocity'])",
"Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.",
"num_train = 4000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\n\nfor update_rule in ['sgd', 'sgd_momentum']:\n print 'running with ', update_rule\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': 1e-2,\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.iteritems():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"RMSProp and Adam\nRMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.\nIn the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.\n[1] Tijmen Tieleman and Geoffrey Hinton. \"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012).\n[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015.",
"# Test RMSProp implementation; you should see errors less than 1e-7\nfrom cs231n.optim import rmsprop\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\ncache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'cache': cache}\nnext_w, _ = rmsprop(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],\n [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],\n [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])\nexpected_cache = np.asarray([\n [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],\n [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],\n [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],\n [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])\n\nprint 'next_w error: ', rel_error(expected_next_w, next_w)\nprint 'cache error: ', rel_error(expected_cache, config['cache'])\n\n# Test Adam implementation; you should see errors around 1e-7 or less\nfrom cs231n.optim import adam\n\nN, D = 4, 5\nw = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)\ndw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)\nm = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)\nv = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)\n\nconfig = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\nnext_w, _ = adam(w, dw, config=config)\n\nexpected_next_w = np.asarray([\n [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],\n [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],\n [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])\nexpected_v = np.asarray([\n [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],\n [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],\n [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],\n [ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])\nexpected_m = np.asarray([\n [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],\n [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],\n [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],\n [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])\n\nprint 'next_w error: ', rel_error(expected_next_w, next_w)\nprint 'v error: ', rel_error(expected_v, config['v'])\nprint 'm error: ', rel_error(expected_m, config['m'])",
"Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:",
"learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}\nfor update_rule in ['adam', 'rmsprop']:\n print 'running with ', update_rule\n model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)\n\n solver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rates[update_rule]\n },\n verbose=True)\n solvers[update_rule] = solver\n solver.train()\n print\n\nplt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nfor update_rule, solver in solvers.iteritems():\n plt.subplot(3, 1, 1)\n plt.plot(solver.loss_history, 'o', label=update_rule)\n \n plt.subplot(3, 1, 2)\n plt.plot(solver.train_acc_history, '-o', label=update_rule)\n\n plt.subplot(3, 1, 3)\n plt.plot(solver.val_acc_history, '-o', label=update_rule)\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Train a good model!\nTrain the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.\nIf you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.\nYou might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.",
"best_model = None\n################################################################################\n# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might #\n# batch normalization and dropout useful. Store your best model in the #\n# best_model variable. #\n################################################################################\nnum_train = data['X_train'].shape[0]\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\ndropout=0.1\nmodel = FullyConnectedNet([100, 100, 100], weight_scale=5e-2, use_batchnorm=True, dropout=dropout)\n\nupdate_rule = 'adam'\nlearning_rate = 1e-3\nsolver = Solver(model, small_data,\n num_epochs=5, batch_size=100,\n update_rule=update_rule,\n optim_config={\n 'learning_rate': learning_rate\n },\n verbose=True)\nsolver.train()\nbest_model = model\n################################################################################\n# END OF YOUR CODE #\n################################################################################",
"Test you model\nRun your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.",
"X_test = data['X_test']\ny_test = data['y_test']\nX_val = data['X_val']\ny_val = data['y_val']\ny_test_pred = np.argmax(best_model.loss(X_test), axis=1)\ny_val_pred = np.argmax(best_model.loss(X_val), axis=1)\nprint 'Validation set accuracy: ', (y_val_pred == y_val).mean()\nprint 'Test set accuracy: ', (y_test_pred == y_test).mean()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ktmud/deep-learning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | [
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.",
"def model_inputs(real_dim, z_dim):\n inputs_real = \n inputs_z = \n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out: \n '''\n with tf.variable_scope # finish this\n # Hidden layer\n h1 = \n # Leaky ReLU\n h1 = \n \n # Logits and tanh output\n logits = \n out = \n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope # finish this\n # Hidden layer\n h1 =\n # Leaky ReLU\n h1 =\n \n logits =\n out =\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = \n\n# Generator network here\ng_model = \n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = \nd_model_fake, d_logits_fake = ",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.",
"# Calculate losses\nd_loss_real = \n\nd_loss_fake = \n\nd_loss = \n\ng_loss = ",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = \ng_vars = \nd_vars = \n\nd_train_opt = \ng_train_opt = ",
"Training",
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Hugovdberg/timml | notebooks/timml_notebook1_sol.ipynb | mit | [
"TimML Notebook 1\nA well in uniform flow\nConsider a well in the middle aquifer of a three aquifer system. Aquifer properties are given in Table 1. The well is located at $(x,y)=(0,0)$, the discharge is $Q=10,000$ m$^3$/d and the radius is 0.2 m. There is a uniform flow from West to East with a gradient of 0.002. The head is fixed to 20 m at a distance of 10,000 m downstream of the well. Here is the cookbook recipe to build this model:\n\nImport pylab to use numpy and plotting: from pylab import *\nSet figures to be in the notebook with %matplotlib notebook\nImport everything from TimML: from timml import *\nCreate the model and give it a name, for example ml with the command ml = ModelMaq(kaq, z, c) (substitute the correct lists for kaq, z, and c).\nEnter the well with the command w = Well(ml, xw, yw, Qw, rw, layers), where the well is called w.\nEnter uniform flow with the command Uflow(ml, slope, angle).\nEnter the reference head with Constant(ml, xr, yr, head, layer).\nSolve the model ml.solve()\n\nTable 1: Aquifer data for exercise 1\n|Layer |$k$ (m/d)|$z_b$ (m)|$z_t$|$c$ (days)|\n|-------------|--------:|--------:|----:|---------:|\n|Aquifer 0 | 10 | -20 | 0 | - |\n|Leaky Layer 1| - | -40 | -20 | 4000 | \n|Aquifer 1 | 20 | -80 | -40 | - |\n|Leaky Layer 2| - | -90 | -80 | 10000 | \n|Aquifer 2 | 5 | -140 | -90 | - ||",
"%matplotlib inline\nfrom pylab import *\nfrom timml import *\nfigsize=(8, 8)\n\nml = ModelMaq(kaq=[10, 20, 5],\n z=[0, -20, -40, -80, -90, -140], \n c=[4000, 10000])\nw = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)\nConstant(ml, xr=10000, yr=0, hr=20, layer=0)\nUflow(ml, slope=0.002, angle=0)\nml.solve()",
"Questions:\nExercise 1a\nWhat are the leakage factors of the aquifer system?",
"print('The leakage factors of the aquifers are:')\nprint(ml.aq.lab)",
"Exercise 1b\nWhat is the head at the well?",
"print('The head at the well is:')\nprint(w.headinside())",
"Exercise 1c\nCreate a contour plot of the head in the three aquifers. Use a window with lower left hand corner $(x,y)=(−3000,−3000)$ and upper right hand corner $(x,y)=(3000,3000)$. Notice that the heads in the three aquifers are almost equal at three times the largest leakage factor.",
"ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[0, 1, 2], levels=10, \n legend=True, figsize=figsize)",
"Exercise 1d\nCreate a contour plot of the head in aquifer 1 with labels along the contours. Labels are added when the labels keyword argument is set to True. The number of decimal places can be set with the decimals keyword argument, which is zero by default.",
"ml.contour(win=[-3000, 3000, -3000, 3000], ngr=50, layers=[1], levels=np.arange(30, 45, 1), \n labels=True, legend=['layer 1'], figsize=figsize)",
"Exercise 1e\nCreate a contour plot with a vertical cross-section below it. Start three pathlines from $(x,y)=(-2000,-1000)$ at levels $z=-120$, $z=-60$, and $z=-10$. Try a few other starting locations.",
"win=[-3000, 3000, -3000, 3000]\nml.plot(win=win, orientation='both', figsize=figsize)\nml.tracelines(-2000 * ones(3), -1000 * ones(3), [-120, -60, -10], hstepmax=50, \n win=win, orientation='both')\nml.tracelines(0 * ones(3), 1000 * ones(3), [-120, -50, -10], hstepmax=50, \n win=win, orientation='both')",
"Exercise 1f\nAdd an abandoned well that is screened in both aquifer 0 and aquifer 1, located at $(x, y) = (100, 100)$ and create contour plot of all aquifers near the well (from (-200,-200) till (200,200)). What are the discharge and the head at the abandoned well? Note that you have to solve the model again!",
"ml = ModelMaq(kaq=[10, 20, 5],\n z=[0, -20, -40, -80, -90, -140], \n c=[4000, 10000])\nw = Well(ml, xw=0, yw=0, Qw=10000, rw=0.2, layers=1)\nConstant(ml, xr=10000, yr=0, hr=20, layer=0)\nUflow(ml, slope=0.002, angle=0)\nwabandoned = Well(ml, xw=100, yw=100, Qw=0, rw=0.2, layers=[0, 1])\nml.solve()\nml.contour(win=[-200, 200, -200, 200], ngr=50, layers=[0, 2], \n levels=20, color=['C0', 'C1', 'C2'], legend=True, figsize=figsize)\n\nprint('The head at the abandoned well is:')\nprint(wabandoned.headinside())\nprint('The discharge at the abandoned well is:')\nprint(wabandoned.discharge())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dsevilla/bdge | mongo/sesion4.ipynb | mit | [
"NoSQL (MongoDB) (sesión 4)\n\nEsta hoja muestra cómo acceder a bases de datos MongoDB y también a conectar la salida con Jupyter. Se puede utilizar el shell propio de MongoDB en la máquina virtual usando el programa mongo. La diferencia es que ese programa espera código Javascript y aquí trabajaremos con Python.",
"!pip install --upgrade pymongo\n\nfrom pprint import pprint as pp\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n%matplotlib inline\nmatplotlib.style.use('ggplot')",
"Usaremos la librería pymongo para python. La cargamos a continuación.",
"import pymongo\nfrom pymongo import MongoClient",
"La conexión se inicia con MongoClient en el host descrito en el fichero docker-compose.yml (mongo).",
"client = MongoClient(\"mongo\",27017)\nclient\n\nclient.list_database_names()",
"Format: 7zipped\nFiles:\nbadges.xml\nUserId, e.g.: \"420\"\nName, e.g.: \"Teacher\"\nDate, e.g.: \"2008-09-15T08:55:03.923\"\n\n\ncomments.xml\nId\nPostId\nScore\nText, e.g.: \"@Stu Thompson: Seems possible to me - why not try it?\"\nCreationDate, e.g.:\"2008-09-06T08:07:10.730\"\nUserId\n\n\nposts.xml\nId\nPostTypeId\n1: Question\n2: Answer\n\n\nParentID (only present if PostTypeId is 2)\nAcceptedAnswerId (only present if PostTypeId is 1)\nCreationDate\nScore\nViewCount\nBody\nOwnerUserId\nLastEditorUserId\nLastEditorDisplayName=\"Jeff Atwood\"\nLastEditDate=\"2009-03-05T22:28:34.823\"\nLastActivityDate=\"2009-03-11T12:51:01.480\"\nCommunityOwnedDate=\"2009-03-11T12:51:01.480\"\nClosedDate=\"2009-03-11T12:51:01.480\"\nTitle=\nTags=\nAnswerCount\nCommentCount\nFavoriteCount\n\n\nposthistory.xml\nId\nPostHistoryTypeId\n - 1: Initial Title - The first title a question is asked with.\n - 2: Initial Body - The first raw body text a post is submitted with.\n - 3: Initial Tags - The first tags a question is asked with.\n - 4: Edit Title - A question's title has been changed.\n - 5: Edit Body - A post's body has been changed, the raw text is stored here as markdown.\n - 6: Edit Tags - A question's tags have been changed.\n - 7: Rollback Title - A question's title has reverted to a previous version.\n - 8: Rollback Body - A post's body has reverted to a previous version - the raw text is stored here.\n - 9: Rollback Tags - A question's tags have reverted to a previous version.\n - 10: Post Closed - A post was voted to be closed.\n - 11: Post Reopened - A post was voted to be reopened.\n - 12: Post Deleted - A post was voted to be removed.\n - 13: Post Undeleted - A post was voted to be restored.\n - 14: Post Locked - A post was locked by a moderator.\n - 15: Post Unlocked - A post was unlocked by a moderator.\n - 16: Community Owned - A post has become community owned.\n - 17: Post Migrated - A post was migrated.\n - 18: Question Merged - A question has had another, deleted question merged into itself.\n - 19: Question Protected - A question was protected by a moderator\n - 20: Question Unprotected - A question was unprotected by a moderator\n - 21: Post Disassociated - An admin removes the OwnerUserId from a post.\n - 22: Question Unmerged - A previously merged question has had its answers and votes restored.\nPostId\nRevisionGUID: At times more than one type of history record can be recorded by a single action. 
All of these will be grouped using the same RevisionGUID\nCreationDate: \"2009-03-05T22:28:34.823\"\nUserId\nUserDisplayName: populated if a user has been removed and no longer referenced by user Id\nComment: This field will contain the comment made by the user who edited a post\nText: A raw version of the new value for a given revision\nIf PostHistoryTypeId = 10, 11, 12, 13, 14, or 15 this column will contain a JSON encoded string with all users who have voted for the PostHistoryTypeId\nIf PostHistoryTypeId = 17 this column will contain migration details of either \"from <url>\" or \"to <url>\"\n\n\nCloseReasonId\n1: Exact Duplicate - This question covers exactly the same ground as earlier questions on this topic; its answers may be merged with another identical question.\n2: off-topic\n3: subjective\n4: not a real question\n7: too localized\n\n\n\n\n\n\npostlinks.xml\nId\nCreationDate\nPostId\nRelatedPostId\nPostLinkTypeId\n1: Linked\n3: Duplicate\n\n\nusers.xml\nId\nReputation\nCreationDate\nDisplayName\nEmailHash\nLastAccessDate\nWebsiteUrl\nLocation\nAge\nAboutMe\nViews\nUpVotes\nDownVotes\n\n\nvotes.xml\nId\nPostId\nVoteTypeId\n1: AcceptedByOriginator\n2: UpMod\n3: DownMod\n4: Offensive\n5: Favorite - if VoteTypeId = 5 UserId will be populated\n6: Close\n7: Reopen\n8: BountyStart\n9: BountyClose\n10: Deletion\n11: Undeletion\n12: Spam\n13: InformModerator\n\n\nCreationDate\nUserId (only for VoteTypeId 5)\nBountyAmount (only for VoteTypeId 9)\n\n\n\nLas bases de datos se crean conforme se nombran. Se puede utilizar la notación punto o la de diccionario. Las colecciones también.",
"db = client.stackoverflow\ndb = client['stackoverflow']\ndb",
"Las bases de datos están compuestas por un conjunto de colecciones. Cada colección aglutina a un conjunto de objetos (documentos) del mismo tipo, aunque como vimos en teoría, cada documento puede tener un conjunto de atributos diferente.",
"posts = db.posts\nposts",
"Importación de los ficheros CSV. Por ahora creamos una colección diferente para cada uno. Después estudiaremos cómo poder optimizar el acceso usando agregación.",
"import os\nimport os.path as path\nfrom urllib.request import urlretrieve\n\ndef download_csv_upper_dir(baseurl, filename):\n file = path.abspath(path.join(os.getcwd(),os.pardir,filename))\n if not os.path.isfile(file):\n urlretrieve(baseurl + '/' + filename, file)\n\nbaseurl = 'http://neuromancer.inf.um.es:8080/es.stackoverflow/'\ndownload_csv_upper_dir(baseurl, 'Posts.csv')\ndownload_csv_upper_dir(baseurl, 'Users.csv')\ndownload_csv_upper_dir(baseurl, 'Tags.csv')\ndownload_csv_upper_dir(baseurl, 'Comments.csv')\ndownload_csv_upper_dir(baseurl, 'Votes.csv')\n\nimport csv\nfrom datetime import datetime\n\ndef csv_to_mongo(file, coll):\n \"\"\"\n Carga un fichero CSV en Mongo. file especifica el fichero, coll la colección\n dentro de la base de datos, y date_cols las columnas que serán interpretadas\n como fechas.\n \"\"\"\n # Convertir todos los elementos que se puedan a números\n def to_numeric(d):\n try:\n return int(d)\n except ValueError:\n try:\n return float(d)\n except ValueError:\n return d\n \n def to_date(d):\n \"\"\"To ISO Date. If this cannot be converted, return NULL (None)\"\"\"\n try:\n return datetime.strptime(d, \"%Y-%m-%dT%H:%M:%S.%f\")\n except ValueError:\n return None\n \n coll.drop()\n\n with open(file, encoding='utf-8') as f:\n # La llamada csv.reader() crea un iterador sobre un fichero CSV\n reader = csv.reader(f, dialect='excel')\n \n # Se leen las columnas. Sus nombres se usarán para crear las diferentes columnas en la familia\n columns = next(reader)\n \n # Las columnas que contienen 'Date' se interpretan como fechas\n func_to_cols = list(map(lambda c: to_date if 'date' in c.lower() else to_numeric, columns))\n \n docs=[]\n for row in reader:\n row = [func(e) for (func,e) in zip(func_to_cols, row)]\n docs.append(dict(zip(columns, row)))\n coll.insert_many(docs)\n\ncsv_to_mongo('../Posts.csv',db.posts)\n\ncsv_to_mongo('../Users.csv',db.users)\n\ncsv_to_mongo('../Votes.csv',db.votes)\n\ncsv_to_mongo('../Comments.csv',db.comments)\n\ncsv_to_mongo('../Tags.csv',db.tags)\n\nposts.count_documents()",
"El API de colección en Python se puede encontrar aquí: https://api.mongodb.com/python/current/api/pymongo/collection.html. La mayoría de libros y referencias muestran el uso de mongo desde Javascript, ya que el shell de MongoDB acepta ese lenguaje. La sintaxis con respecto a Python cambia un poco, y se puede seguir en el enlace anterior.\nCreación de índices\nPara que el proceso map-reduce y de agregación funcione mejor, voy a crear índices sobre los atributos que se usarán como índice... Ojo, si no se crea las consultas pueden tardar mucho.",
"(\n db.posts.create_index([('Id', pymongo.HASHED)]),\n db.comments.create_index([('Id', pymongo.HASHED)]),\n db.users.create_index([('Id', pymongo.HASHED)])\n)",
"Map-Reduce\nMongodb incluye dos APIs para procesar y buscar documentos: el API de Map-Reduce y el API de agregación. Veremos primero el de Map-Reduce. Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce",
"from bson.code import Code\n\nmap = Code(\n'''\nfunction () {\n emit(this.OwnerUserId, 1);\n}\n''')\n\nreduce = Code(\n'''\nfunction (key, values)\n{\n return Array.sum(values);\n}\n''')\n\nresults = posts.map_reduce(map, reduce, \"posts_by_userid\")\n\nposts_by_userid = db.posts_by_userid\nlist(posts_by_userid.find())",
"Se le puede añadir una etiqueta para especificar sobre qué elementos queremos trabajar (query):\nLa función map_reduce puede llevar añadida una serie de keywords, los mismos especificados en la documentación:\n\nquery: Restringe los datos que se tratan\nsort: Ordena los documentos de entrada por alguna clave\nlimit: Limita el número de resultados\nout: Especifica la colección de salida y otras opciones. Lo veremos después.\netc.\n\nEn el parámetro out se puede especificar en qué colección se quedarán los datos resultado del map-reduce. Por defecto, en la colección origen. (Todos los parámetros aquí: https://docs.mongodb.com/manual/reference/command/mapReduce/#mapreduce-out-cmd). En la operación map_reduce() podemos especificar la colección de salida, pero también podemos añadir un parámetro final out={...}.\nHay varias posibilidades para out:\n\nreplace: Sustituye la colección, si la hubiera, con la especificada (p. ej.: out={ \"replace\" : \"coll\" }.\nmerge: Mezcla la colección existente, sustituyendo los documentos que existan por los generados.\nreduce: Si existe un documento con el mismo _id en la colección, se aplica la función reduce para fusionar ambos documentos y producir un nuevo documento.\n\nVeremos a continuación, al resolver el ejercicio de crear post_comments con map-reduce cómo se utilizan estas posibilidades.\nTambién hay operaciones específicas de la coleción, como count(), groupby() y distinct():",
"db.posts.distinct('Score')",
"EJERCICIO (resuelto): Construir, con el API de Map-Reduce, una colección 'post_comments', donde se añade el campo 'Comments' a cada Post con la lista de todos los comentarios referidos a un Post.\nVeremos la resolución de este ejercicio para que haga de ejemplo para los siguientes a implementar. En primer lugar, una operación map/reduce sólo se puede ejecutar sobre una colección, así que sólo puede contener resultados de la misma. Por lo tanto, con sólo una operación map/reduce no va a ser posible realizar todo el ejercicio.\nAsí, en primer lugar, parece interesante agrupar todos los comentarios que se han producido de un Post en particular. En cada comentario, el atributo PostId marca una referencia al Post al que se refiere.\nEs importante cómo se construyen las operaciones map() y reduce(). Primero, la función map() se ejecutará para todos los documentos (o para todos los que cumplan la condición si se utiliza el modificador query=). Sin embargo, la función reduce() no se ejecutará a no ser que haya más de un elemento asociado a la misma clave.\nPor lo tanto, la salida de la función map() debe ser la misma que la de la función reduce(). En nuestro caso, es un objeto JSON de la forma:\n{ type: 'comment', comments: [ {comentario1, comentario2} ] }\n\nEn el caso de que sólo se ejecute la función map(), nótese cómo el objeto tiene la misma composición, pero con un array de sólo un elemento (comentario): sí mismo.",
"from bson.code import Code\n\ncomments_map = Code('''\nfunction () {\n emit(this.PostId, { type: 'comment', comments: [this]});\n}\n''')\n\ncomments_reduce = Code('''\nfunction (key, values) {\n comments = [];\n values.forEach(function(v) {\n if ('comments' in v)\n comments = comments.concat(v.comments)\n })\n return { type: 'comment', comments: comments };\n}\n''')\n\ndb.comments.map_reduce(comments_map, comments_reduce, \"post_comments\")\n\nlist(db.post_comments.find()[:10])",
"Esto demuestra que en general el esquema de datos en MongoDB no estaría así desde el principio.\nDespués del primer paso de map/reduce, tenemos que construir la colección final que asocia cada Post con sus comentarios. Como hemos construido antes la colección post_comments indizada por el Id del Post, podemos utilizar ahora una ejecución de map/reduce que mezcle los datos en post_comments con los datos en posts.\nLa segunda ejecución de map/reduce la haremos sobre posts, para que el resultado sea completo, incluso para los Posts que no aparecen en comentarios, y por lo tanto tendrán el atributo comments vacío.\nEn este caso, debemos hacer que la función map() produzca una salida de documentos que también están indizados con el atributo Id, y, como sólo hay uno para cada Id, la función reduce() no se ejecutará. Tan sólo se ejecutará para mezclar ambas colecciones, así que la función reduce() tendrá que estar preparada para mezclar objetos de tipo \"comment\" y Posts. En cualquier caso, como se puede ver, es válida también aunque sólo se llame con un objeto de tipo Post. Finalmente, la función map() prepara a cada objeto Post, inicialmente, con una lista de comentarios vacíos",
"posts_map = Code(\"\"\"\nfunction () {\n this.comments = [];\n emit(this.Id, this);\n}\n\"\"\")\n\nposts_reduce = Code(\"\"\"\nfunction (key, values) {\n comments = []; // The set of comments\n obj = {}; // The object to return\n \n values.forEach(function(v) {\n if (v['type'] === 'comment')\n comments = comments.concat(v.comments);\n else // Object\n {\n obj = v;\n // obj.comments will always be there because of the map() operation\n comments = comments.concat(obj.comments);\n }\n })\n \n // Finalize: Add the comments to the object to return\n obj.comments = comments;\n\n return obj;\n}\n\"\"\")\n\ndb.posts.map_reduce(posts_map, posts_reduce, out={'reduce' : 'post_comments'})\n\nlist(db.post_comments.find()[:10])",
"Framework de Agregación\nFramework de agregación: https://docs.mongodb.com/manual/reference/operator/aggregation/. Y aquí una presentación interesante sobre el tema: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152\n<video style=\"width:100%;\" src=\"https://docs.mongodb.com/manual/_images/agg-pipeline.mp4\" controls> </video>\n\nProyección:",
"respuestas = db['posts'].aggregate( [ {'$project' : { 'Id' : True }}, {'$limit': 20} ])\nlist(respuestas)",
"Lookup!",
"respuestas = posts.aggregate( [\n {'$match': { 'Score' : {'$gte': 40}}},\n {'$lookup': {\n 'from': \"users\", \n 'localField': \"OwnerUserId\",\n 'foreignField': \"Id\",\n 'as': \"owner\"}\n }\n ])\nlist(respuestas)",
"El $lookup genera un array con todos los resultados. El operador $arrayElementAt accede al primer elemento.",
"respuestas = db.posts.aggregate( [\n {'$match': { 'Score' : {'$gte': 40}}},\n {'$lookup': {\n 'from': \"users\", \n 'localField': \"OwnerUserId\",\n 'foreignField': \"Id\",\n 'as': \"owner\"}\n },\n { '$project' :\n {\n 'Id' : True,\n 'Score' : True,\n 'username' : {'$arrayElemAt' : ['$owner.DisplayName', 0]},\n 'owner.DisplayName' : True\n }}\n ])\nlist(respuestas)",
"$unwind también puede usarse. \"Desdobla\" cada fila por cada elemento del array. En este caso, como sabemos que el array sólo contiene un elemento, sólo habrá una fila por fila original, pero sin el array. Finalmente se puede proyectar el campo que se quiera.",
"respuestas = db.posts.aggregate( [\n {'$match': { 'Score' : {'$gte': 40}}},\n {'$lookup': {\n 'from': \"users\", \n 'localField': \"OwnerUserId\",\n 'foreignField': \"Id\",\n 'as': \"owner\"}\n },\n { '$unwind': '$owner'},\n { '$project' : \n {\n 'username': '$owner.DisplayName'\n }\n }\n ])\nlist(respuestas)",
"Ejemplo de realización de la consulta RQ4\nComo ejemplo de consulta compleja con el Framework de Agregación, adjunto una posible solución a la consulta RQ4:",
"RQ4 = db.posts.aggregate( [\n { \"$match\" : {\"PostTypeId\": 2}},\n {'$lookup': {\n 'from': \"posts\", \n 'localField': \"ParentId\",\n 'foreignField': \"Id\",\n 'as': \"question\"\n }\n },\n {\n '$unwind' : '$question'\n },\n {\n '$project' : { 'OwnerUserId': True, \n 'OP' : '$question.OwnerUserId'\n }\n },\n {\n '$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },\n 'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},\n 'pairs' : {'$addToSet' : { '0q': '$OP', '1a': '$OwnerUserId'}}\n }\n },\n {\n '$project': {\n 'pairs' : True,\n 'npairs' : { '$size' : '$pairs'}\n }\n },\n {\n '$match' : { 'npairs' : { '$eq' : 2}}\n }\n ])\nRQ4 = list(RQ4)\nRQ4",
"La explicación es como sigue:\n\nSe eligen sólo las respuestas\nSe accede a la tabla posts para recuperar los datos de la pregunta\nA continuación se proyectan sólo el usuario que pregunta y el que hace la respuesta\nEl paso más imaginativo es el de agrupación. Lo que se intenta es que ambos pares de usuarios que están relacionados como preguntante -> respondiente y viceversa, caigan en la misma clave. Por ello, se coge el máximo y el mínimo de ambos identificadores de usuarios y se construye una clave con ambos números en las mismas posiciones. Así, ambas combinaciones de usuario que pregunta y que responde caerán en la misma clave. También se usa un conjunto (en pairs), y sólo se añadirá una vez las posibles combinaciones iguales de preguntador/respondiente.\nSólo nos interesan aquellas tuplas cuyo tamaño del conjunto de pares de pregunta/respuesta sea igual a dos (en un elemento uno de los dos usuarios habrá preguntado y el otro habrá respondido y en el otro viceversa).\n\nLa implementación en Map-Reduce se puede realizar con la misma idea.\nEn el caso de que queramos tener como referencia las preguntas y respuestas a las que se refiere la conversación, se puede añadir un campo más que guarde todas las preguntas junto con sus respuestas consideradas",
"RQ4 = db.posts.aggregate( [\n {'$match': { 'PostTypeId' : 2}},\n {'$lookup': {\n 'from': \"posts\", \n 'localField': \"ParentId\",\n 'foreignField': \"Id\",\n 'as': \"question\"}\n },\n {\n '$unwind' : '$question'\n },\n {\n '$project' : {'OwnerUserId': True,\n 'QId' : '$question.Id',\n 'AId' : '$Id',\n 'OP' : '$question.OwnerUserId'\n }\n },\n {\n '$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },\n 'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},\n 'pairs' : {'$addToSet' : { '0q':'$OP', '1a': '$OwnerUserId'}},\n 'considered_pairs' : { '$push' : {'QId' : '$QId', 'AId' : '$AId'}}\n }\n },\n {\n '$project': {\n 'pairs' : True,\n 'npairs' : { '$size' : '$pairs'},\n 'considered_pairs' : True\n }\n },\n {\n '$match' : { 'npairs' : { '$eq' : 2}}\n }\n ])\nRQ4 = list(RQ4)\nRQ4\n\n(db.posts.find_one({'Id': 238}), db.posts.find_one({'Id': 243}),\ndb.posts.find_one({'Id': 222}), db.posts.find_one({'Id': 223}))",
"Ejemplo de consulta: Tiempo medio desde que se hace una pregunta hasta que se le da la primera respuesta\nVeamos cómo calcular el tiempo medio desde que se hace una pregunta hasta que se le da la primera respuesta. En este caso se puede utilizar las respuestas para apuntar a qué pregunta correspondieron. No se considerarán pues las preguntas que no tienen respuesta, lo cual es razonable. Sin embargo, la función map debe guardar también las preguntas para poder calcular el tiempo menor (la primera repuesta).",
"from bson.code import Code\n\n# La función map agrupará todas las respuestas, pero también necesita las \nmapcode = Code(\"\"\"\nfunction () {\n if (this.PostTypeId == 2)\n emit(this.ParentId, {q: null, a: {Id: this.Id, CreationDate: this.CreationDate}, diff: null})\n else if (this.PostTypeId == 1)\n emit(this.Id, {q: {Id: this.Id, CreationDate: this.CreationDate}, a: null, diff: null})\n}\n\"\"\")\n\nreducecode = Code(\"\"\"\nfunction (key, values) {\n q = null // Pregunta\n a = null // Respuesta con la fecha más cercana a la pregunta\n \n values.forEach(function(v) {\n if (v.q != null) // Pregunta\n q = v.q\n if (v.a != null) // Respuesta\n {\n if (a == null || v.a.CreationDate < a.CreationDate)\n a = v.a\n }\n })\n\n mindiff = null\n if (q != null && a != null)\n mindiff = a.CreationDate - q.CreationDate;\n\n return {q: q, a: a, diff: mindiff}\n}\n\"\"\")\n\ndb.posts.map_reduce(mapcode, reducecode, \"min_response_time\")\n\nmrt = list(db.min_response_time.find())\n\nfrom pandas.io.json import json_normalize\n\ndf = json_normalize(mrt)\n\ndf.index=df[\"_id\"]\n\ndf\n\ndf['value.diff'].plot()",
"Esto sólo calcula el tiempo mínimo de cada pregunta a su respuesta. Después habría que aplicar lo visto en otros ejemplos para calcular la media. Con agregación, a continuación, sí que se puede calcular la media de forma relativament sencilla:",
"min_answer_time = db.posts.aggregate([\n {\"$match\" : {\"PostTypeId\" : 2}},\n {\n '$group' : {'_id' : '$ParentId',\n # 'answers' : { '$push' : {'Id' : \"$Id\", 'CreationDate' : \"$CreationDate\"}},\n 'min' : {'$min' : \"$CreationDate\"}\n }\n },\n { \"$lookup\" : {\n 'from': \"posts\", \n 'localField': \"_id\",\n 'foreignField': \"Id\",\n 'as': \"post\"}\n },\n { \"$unwind\" : \"$post\"},\n {\"$project\" :\n {\"_id\" : True,\n \"min\" : True,\n #\"post\" : True,\n \"diff\" : {\"$subtract\" : [\"$min\", \"$post.CreationDate\"]}}\n },\n # { \"$sort\" : {'_id' : 1} }\n {\n \"$group\" : {\n \"_id\" : None,\n \"avg\" : { \"$avg\" : \"$diff\"}\n }\n }\n])\nmin_answer_time = list(min_answer_time)\nmin_answer_time",
"EJERCICIO: Con Map-Reduce, construir las colecciones que asocian un usuario con sus tags y los tags con los usuarios que las utilizan (E1).\nEJERCICIO: Con el Framework de Agregación, generar la colección StackOverflowFacts vista en la sesión 2 (E2).\nEJERCICIO: Con Map-Reduce, implementar la consulta RQ3 de la sesión 2.\nEJERCICIO (difícil, opcional): Con Agregación, calcular, enla tabla StackOverflowFacts la media de tiempo que pasa desde que los usuarios se registran hasta que publican su primera pregunta."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/sdk/sdk_automl_image_object_detection_online_export_edge.ipynb | apache-2.0 | [
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex SDK: AutoML training image object detection model for export to edge\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online_export_edge.ipynb\">\n Open in Google Cloud Notebooks\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex SDK to create image object detection models to export as an Edge model using a Google Cloud AutoML model.\nDataset\nThe dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.\nObjective\nIn this tutorial, you create a AutoML image object detection model from a Python script using the Vertex SDK, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nExport the Edge model from the Model resource to Cloud Storage.\nDownload the model locally.\nMake a local prediction.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. 
Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.",
"import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.environ[\"IS_TESTING\"]:\n ! pip3 install --upgrade tensorflow $USER_FLAG",
"Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.",
"import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants",
"import google.cloud.aiplatform as aip",
"Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.",
"aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)",
"Tutorial\nNow you are ready to start creating your own AutoML image object detection model.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.",
"IMPORT_FILE = \"gs://cloud-samples-data/vision/salads.csv\"",
"Quick peek at your data\nThis tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.",
"if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head",
"Create the Dataset\nNext, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\nimport_schema_uri: The data labeling schema for the data items.\n\nThis operation may take several minutes.",
"dataset = aip.ImageDataset.create(\n display_name=\"Salads\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,\n)\n\nprint(dataset.resource_name)",
"Create and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type task to train the model for.\nclassification: An image classification model.\nobject_detection: An image object detection model.\nmulti_label: If a classification task, whether single (False) or multi-labeled (True).\nmodel_type: The type of model for deployment.\nCLOUD: Deployment on Google Cloud\nCLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.\nCLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud.\nMOBILE_TF_VERSATILE_1: Deployment on an edge device.\nMOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.\nMOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.\nbase_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.\n\nThe instantiated object is the DAG (directed acyclic graph) for the training job.",
"dag = aip.AutoMLImageTrainingJob(\n display_name=\"salads_\" + TIMESTAMP,\n prediction_type=\"object_detection\",\n multi_label=False,\n model_type=\"MOBILE_TF_LOW_LATENCY_1\",\n base_model=None,\n)\n\nprint(dag)",
"Run the training pipeline\nNext, you run the DAG to start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\nvalidation_fraction_split: The percentage of the dataset to use for validation.\nbudget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).\ndisable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.\n\nThe run method when completed returns the Model resource.\nThe execution of the training pipeline will take upto 20 minutes.",
"model = dag.run(\n dataset=dataset,\n model_display_name=\"salads_\" + TIMESTAMP,\n training_fraction_split=0.8,\n validation_fraction_split=0.1,\n test_fraction_split=0.1,\n budget_milli_node_hours=20000,\n disable_early_stopping=False,\n)",
"Review model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.",
"# Get model resource ID\nmodels = aip.Model.list(filter=\"display_name=salads_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aip.gapic.ModelServiceClient(client_options=client_options)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)",
"Export as Edge model\nYou can export an AutoML image object detection model as a Edge model which you can then custom deploy to an edge device or download locally. Use the method export_model() to export the model to Cloud Storage, which takes the following parameters:\n\nartifact_destination: The Cloud Storage location to store the SavedFormat model artifacts to.\nexport_format_id: The format to save the model format as. For AutoML image object detection there is just one option:\ntf-saved-model: TensorFlow SavedFormat for deployment to a container.\ntflite: TensorFlow Lite for deployment to an edge or mobile device.\nedgetpu-tflite: TensorFlow Lite for TPU\ntf-js: TensorFlow for web client\n\ncoral-ml: for Coral devices\n\n\nsync: Whether to perform operational sychronously or asynchronously.",
"response = model.export_model(\n artifact_destination=BUCKET_NAME, export_format_id=\"tflite\", sync=True\n)\n\nmodel_package = response[\"artifactOutputUri\"]",
"Download the TFLite model artifacts\nNow that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage.",
"! gsutil ls $model_package\n# Download the model artifacts\n! gsutil cp -r $model_package tflite\n\ntflite_path = \"tflite/model.tflite\"",
"Instantiate a TFLite interpreter\nThe TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows:\n\nInstantiate an TFLite interpreter for the TFLite model.\nInstruct the interpreter to allocate input and output tensors for the model.\nGet detail information about the models input and output tensors that will need to be known for prediction.",
"import tensorflow as tf\n\ninterpreter = tf.lite.Interpreter(model_path=tflite_path)\ninterpreter.allocate_tensors()\n\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\ninput_shape = input_details[0][\"shape\"]\n\nprint(\"input tensor shape\", input_shape)",
"Get test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.",
"test_items = ! gsutil cat $IMPORT_FILE | head -n1\ntest_item = test_items[0].split(\",\")[0]\n\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()\ntest_image = tf.io.decode_jpeg(content)\nprint(\"test image shape\", test_image.shape)\n\ntest_image = tf.image.resize(test_image, (224, 224))\nprint(\"test image shape\", test_image.shape, test_image.dtype)\n\ntest_image = tf.cast(test_image, dtype=tf.uint8).numpy()",
"Make a prediction with TFLite model\nFinally, you do a prediction using your TFLite model, as follows:\n\nConvert the test image into a batch of a single image (np.expand_dims)\nSet the input tensor for the interpreter to your batch of a single image (data).\nInvoke the interpreter.\nRetrieve the softmax probabilities for the prediction (get_tensor).\nDetermine which label had the highest probability (np.argmax).",
"import numpy as np\n\ndata = np.expand_dims(test_image, axis=0)\n\ninterpreter.set_tensor(input_details[0][\"index\"], data)\n\ninterpreter.invoke()\n\nsoftmax = interpreter.get_tensor(output_details[0][\"index\"])\n\nlabel = np.argmax(softmax)\n\nprint(label)",
"Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline trainig job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom trainig job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AtmaMani/pyChakras | python_crash_course/python_cheat_sheet_2.ipynb | mit | [
"Python cheat sheet - iterations\nTable of contents\n - Functions\n - Classes\n - Exception handling\nFunctions\nSpecify optional parameters in the end. Specify the default values for optional parameters with = value notation\ndef func_name(arg1, arg2=None):\n operations\n return value",
"def func_add_numbers(num1, num2=10):\n return (num1 + num2)\n\nfunc_add_numbers(2)\n\nfunc_add_numbers(2,34)\n\nfunc_add_numbers()",
"Classes\nEverything is an object in Python including native types. You define class names with camel casing.\nYou define the constructor with special name __init__(). The fields (private) are denoted with _variable_name specification and properties are decorated with @property decorator.\nFields and properties are accessed within the class using self.name notation. This helps differentiate a class field / property from a local variable or method argument of the same name.\nA simple class\nclass MyClass:\n _local_variables = \"value\"\n\n def __init__(self, args): #constructor\n statements\n self._local_variables = args # assign values to fields\n\n def func_1(self, args):\n statements\n\nYou use this method by instantiating an object.\nobj1 = myClass(args_defined_in_constructor)",
"# Define a class to hold a satellite or aerial imagery file. Its properties give information\n# such as location of the ground, area, dimensions, spatial and spectral resolution etc.\n\nclass ImageryObject:\n _default_gsd = 5.0\n \n def __init__(self, file_path):\n self._file_path = file_path\n self._gps_location = (3,4)\n \n @property\n def bands(self):\n #count number of bands\n count = 3\n return count\n \n @property\n def gsd(self):\n # logic to calculate the ground sample distance\n gsd = 10.0\n return gsd\n \n @property\n def address(self):\n # logic to reverse geocode the self._gps_location to get address\n # reverse geocode self._gps_location\n address = \"123 XYZ Street\"\n return address\n \n #class methods\n def display(self):\n #logic to display picture\n print(\"image is displayed\")\n \n def shuffle_bands(self):\n #logic to shift RGB combination\n print(\"shifting pands\")\n self.display()\n\n# class instantiation\nimg1 = ImageryObject(\"user\\img\\file.img\") #pass value to constructor\n\nimg1.address\n\nimg1._default_gsd\n\nimg1._gps_location\n\nimg1.shuffle_bands()\n\n# Get help on any object. Only public methods, properties are displayed.\n# fields are private, properties are public. Class variables beginning with _ are private fields.\nhelp(img1)",
"Exception handling\nExceptions are classes. You can define your own by inheriting from Exception class.\ntry:\n statements\n\nexcept Exception_type1 as e1:\n handling statements\n\nexcept Exception_type2 as e2:\n specific handling statements\n\nexcept Exception as generic_ex:\n generic handling statements\n\nelse:\n some more statements\n\nfinally:\n default statements which will always be executed",
"try:\n img2 = ImageryObject(\"user\\img\\file2.img\")\n img2.display()\nexcept:\n print(\"something bad happened\")\n\ntry:\n img2 = ImageryObject(\"user\\img\\file2.img\")\n img2.display()\nexcept:\n print(\"something bad happened\")\nelse:\n print(\"else block\")\nfinally:\n print(\"finally block\")\n\ntry:\n img2 = ImageryObject()\n img2.display()\nexcept:\n print(\"something bad happened\")\nelse:\n print(\"else block\")\nfinally:\n print(\"finally block\")\n\ntry:\n img2 = ImageryObject()\n img2.display()\n\nexcept Exception as ex:\n print(\"something bad happened\")\n print(\"exactly what whent bad? : \" + str(ex))\n\ntry:\n img2 = ImageryObject('path')\n img2.dddisplay()\n\nexcept TypeError as terr:\n print(\"looks like you forgot a parameter\")\nexcept Exception as ex:\n print(\"nope, it went worng here: \" + str(ex))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
paoloRais/lightfm | examples/quickstart/quickstart.ipynb | apache-2.0 | [
"Quickstart\nIn this example, we'll build an implicit feedback recommender using the Movielens 100k dataset (http://grouplens.org/datasets/movielens/100k/).\nThe code behind this example is available as a Jupyter notebook\nLightFM includes functions for getting and processing this dataset, so obtaining it is quite easy.",
"import numpy as np\n\nfrom lightfm.datasets import fetch_movielens\n\ndata = fetch_movielens(min_rating=5.0)",
"This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise.\nWe have two such matrices, a training and a testing set. Both have around 1000 users and 1700 items. We'll train the model on the train matrix but test it on the test matrix.",
"print(repr(data['train']))\nprint(repr(data['test']))",
"We need to import the model class to fit the model:",
"from lightfm import LightFM",
"We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with they implicitly do not like. The goal of the model is to score these implicit positives highly while assigining low scores to implicit negatives.\nModel training is accomplished via SGD (stochastic gradient descent). This means that for every pass through the data --- an epoch --- the model learns to fit the data more and more closely. We'll run it for 10 epochs in this example. We can also run it on multiple cores, so we'll set that to 2. (The dataset in this example is too small for that to make a difference, but it will matter on bigger datasets.)",
"model = LightFM(loss='warp')\n%time model.fit(data['train'], epochs=30, num_threads=2)",
"Done! We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of top k items in the ranking the user has actually interacted with. lightfm implements a number of metrics in the evaluation module.",
"from lightfm.evaluation import precision_at_k",
"We'll measure precision in both the train and the test set.",
"print(\"Train precision: %.2f\" % precision_at_k(model, data['train'], k=5).mean())\nprint(\"Test precision: %.2f\" % precision_at_k(model, data['test'], k=5).mean())",
"Unsurprisingly, the model fits the train set better than the test set.\nFor an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for given user, we pass the id of that user and the ids of all products we want predictions for into the predict method.",
"def sample_recommendation(model, data, user_ids):\n \n\n n_users, n_items = data['train'].shape\n\n for user_id in user_ids:\n known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices]\n \n scores = model.predict(user_id, np.arange(n_items))\n top_items = data['item_labels'][np.argsort(-scores)]\n \n print(\"User %s\" % user_id)\n print(\" Known positives:\")\n \n for x in known_positives[:3]:\n print(\" %s\" % x)\n\n print(\" Recommended:\")\n \n for x in top_items[:3]:\n print(\" %s\" % x)\n \nsample_recommendation(model, data, [3, 25, 450]) "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jo-tez/aima-python | csp.ipynb | mit | [
"CONSTRAINT SATISFACTION PROBLEMS\nThis IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in csp.py module. Even though this notebook includes a brief summary of the main topics, familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.",
"from csp import *\nfrom notebook import psource, pseudocode, plot_NQueens\n%matplotlib inline\n\n# Hide warnings in the matplotlib sections\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"CONTENTS\n\nOverview\nGraph Coloring\nN-Queens\nAC-3\nBacktracking Search\nTree CSP Solver\nGraph Coloring Visualization\nN-Queens Visualization\n\nOVERVIEW\nCSPs are a special kind of search problems. Here we don't treat the space as a black box but the state has a particular form and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP State is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.",
"psource(CSP)",
"The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as dict (dictionary datatpye) where \"key\" specifies the variables and \"value\" specifies the domains. The variables are passed as an empty list. Variables are extracted from the keys of the domain dictionary. Neighbor is a dict of variables that essentially describes the constraint graph. Here each variable key has a list of its values which are the variables that are constraint along with it. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassings which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.\nGRAPH COLORING\nWe use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of map coloring problem is that the adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes we use a custom dict defined by the UniversalDict class. The UniversalDict Class takes in a parameter and returns it as a value for all the keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.",
"s = UniversalDict(['R','G','B'])\ns[5]",
"For our CSP we also need to define a constraint function f(A, a, B, b). In this, we need to ensure that the neighbors don't have the same color. This is defined in the function different_values_constraint of the module.",
"psource(different_values_constraint)",
"The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows us to take input in the form of strings and return a Dict of a form that is compatible with the CSP Class.",
"%pdoc parse_neighbors",
"The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constratint function. Australia, USA and France are three CSPs that have been created using MapColoringCSP. Australia corresponds to Figure 6.1 in the book.",
"psource(MapColoringCSP)\n\naustralia, usa, france",
"N-QUEENS\nThe N-queens puzzle is the problem of placing N chess queens on an N×N chessboard in a way such that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit this particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.",
"psource(queen_constraint)",
"The NQueensCSP method implements methods that support solving the problem via min_conflicts which is one of the many popular techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve, the CSP assign and unassign are modified to record conflicts. More details about the structures: rows, downs, ups which help in recording conflicts are explained in the docstring.",
"psource(NQueensCSP)",
"The _ init _ method takes only one parameter n i.e. the size of the problem. To create an instance, we just pass the required value of n into the constructor.",
"eight_queens = NQueensCSP(8)",
"We have defined our CSP. \nNow, we need to solve this.\nMin-conflicts\nAs stated above, the min_conflicts algorithm is an efficient method to solve such a problem.\n<br>\nIn the start, all the variables of the CSP are randomly initialized. \n<br>\nThe algorithm then randomly selects a variable that has conflicts and violates some constraints of the CSP.\n<br>\nThe selected variable is then assigned a value that minimizes the number of conflicts.\n<br>\nThis is a simple stochastic algorithm which works on a principle similar to Hill-climbing.\nThe conflicting state is repeatedly changed into a state with fewer conflicts in an attempt to reach an approximate solution.\n<br>\nThis algorithm sometimes benefits from having a good initial assignment.\nUsing greedy techniques to get a good initial assignment and then using min_conflicts to solve the CSP can speed up the procedure dramatically, especially for CSPs with a large state space.",
"psource(min_conflicts)",
"Let's use this algorithm to solve the eight_queens CSP.",
"solution = min_conflicts(eight_queens)",
"This is indeed a valid solution. \n<br>\nnotebook.py has a helper function to visualize the solution space.",
"plot_NQueens(solution)",
"Lets' see if we can find a different solution.",
"eight_queens = NQueensCSP(8)\nsolution = min_conflicts(eight_queens)\nplot_NQueens(solution)",
"The solution is a bit different this time. \nRunning the above cell several times should give you different valid solutions.\n<br>\nIn the search.ipynb notebook, we will see how NQueensProblem can be solved using a heuristic search method such as uniform_cost_search and astar_search.\nHelper Functions\nWe will now implement a few helper functions that will allow us to visualize the Coloring Problem; we'll also make a few modifications to the existing classes and functions for additional record keeping. To begin, we modify the assign and unassign methods in the CSP in order to add a copy of the assignment to the assignment_history. We name this new class as InstruCSP; it will allow us to see how the assignment evolves over time.",
"import copy\nclass InstruCSP(CSP):\n \n def __init__(self, variables, domains, neighbors, constraints):\n super().__init__(variables, domains, neighbors, constraints)\n self.assignment_history = []\n \n def assign(self, var, val, assignment):\n super().assign(var,val, assignment)\n self.assignment_history.append(copy.deepcopy(assignment))\n \n def unassign(self, var, assignment):\n super().unassign(var,assignment)\n self.assignment_history.append(copy.deepcopy(assignment))",
"Next, we define make_instru which takes an instance of CSP and returns an instance of InstruCSP.",
"def make_instru(csp):\n return InstruCSP(csp.variables, csp.domains, csp.neighbors, csp.constraints)",
"We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their values are the corresponding nodes they are connected to.",
"neighbors = {\n 0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4], \n 1: [12, 12, 14, 14], \n 2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14], \n 3: [20, 8, 19, 12, 20, 19, 8, 12], \n 4: [11, 0, 18, 5, 18, 5, 11, 0], \n 5: [4, 4], \n 6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14], \n 7: [13, 16, 13, 16], \n 8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14], \n 9: [20, 15, 19, 16, 15, 19, 20, 16], \n 10: [17, 11, 2, 11, 17, 2], \n 11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4], \n 12: [8, 3, 8, 14, 1, 3, 1, 14], \n 13: [7, 15, 18, 15, 16, 7, 18, 16], \n 14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12], \n 15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16], \n 16: [7, 15, 13, 9, 7, 13, 15, 9], \n 17: [10, 2, 2, 10], \n 18: [15, 0, 13, 4, 0, 15, 13, 4], \n 19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9], \n 20: [3, 19, 9, 19, 3, 9]\n}",
"Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.",
"coloring_problem = MapColoringCSP('RGBY', neighbors)\n\ncoloring_problem1 = make_instru(coloring_problem)",
"CONSTRAINT PROPAGATION\nAlgorithms that solve CSPs have a choice between searching and or doing a constraint propagation, a specific type of inference.\nThe constraints can be used to reduce the number of legal values for another variable, which in turn can reduce the legal values for some other variable, and so on. \n<br>\nConstraint propagation tries to enforce local consistency.\nConsider each variable as a node in a graph and each binary constraint as an arc.\nEnforcing local consistency causes inconsistent values to be eliminated throughout the graph, \na lot like the GraphPlan algorithm in planning, where mutex links are removed from a planning graph.\nThere are different types of local consistencies:\n1. Node consistency\n2. Arc consistency\n3. Path consistency\n4. K-consistency\n5. Global constraints\nRefer section 6.2 in the book for details.\n<br>\nAC-3\nBefore we dive into AC-3, we need to know what arc-consistency is.\n<br>\nA variable $X_i$ is arc-consistent with respect to another variable $X_j$ if for every value in the current domain $D_i$ there is some value in the domain $D_j$ that satisfies the binary constraint on the arc $(X_i, X_j)$.\n<br>\nA network is arc-consistent if every variable is arc-consistent with every other variable.\n<br>\nAC-3 is an algorithm that enforces arc consistency.\nAfter applying AC-3, either every arc is arc-consistent, or some variable has an empty domain, indicating that the CSP cannot be solved.\nLet's see how AC3 is implemented in the module.",
"psource(AC3)",
"AC3 also employs a helper function revise.",
"psource(revise)",
"AC3 maintains a queue of arcs to consider which initially contains all the arcs in the CSP.\nAn arbitrary arc $(X_i, X_j)$ is popped from the queue and $X_i$ is made arc-consistent with respect to $X_j$.\n<br>\nIf in doing so, $D_i$ is left unchanged, the algorithm just moves to the next arc, \nbut if the domain $D_i$ is revised, then we add all the neighboring arcs $(X_k, X_i)$ to the queue.\n<br>\nWe repeat this process and if at any point, the domain $D_i$ is reduced to nothing, then we know the whole CSP has no consistent solution and AC3 can immediately return failure.\n<br>\nOtherwise, we keep removing values from the domains of variables until the queue is empty.\nWe finally get the arc-consistent CSP which is faster to search because the variables have smaller domains.\nLet's see how AC3 can be used.\n<br>\nWe'll first define the required variables.",
"neighbors = parse_neighbors('A: B; B: ')\ndomains = {'A': [0, 1, 2, 3, 4], 'B': [0, 1, 2, 3, 4]}\nconstraints = lambda X, x, Y, y: x % 2 == 0 and (x + y) == 4 and y % 2 != 0\nremovals = []",
"We'll now define a CSP object.",
"csp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)\n\nAC3(csp, removals=removals)",
"This configuration is inconsistent.",
"constraints = lambda X, x, Y, y: (x % 2) == 0 and (x + y) == 4\nremovals = []\ncsp = CSP(variables=None, domains=domains, neighbors=neighbors, constraints=constraints)\n\nAC3(csp,removals=removals)",
"This configuration is consistent.\nBACKTRACKING SEARCH\nThe main issue with using Naive Search Algorithms to solve a CSP is that they can continue to expand obviously wrong paths; whereas, in backtracking search, we check the constraints as we go and we deal with only one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to speed it up further. The function returns the correct assignment if it satisfies the goal. However, we will discuss these later. For now, let us solve our coloring_problem1 with backtracking_search.",
"result = backtracking_search(coloring_problem1)\n\nresult # A dictonary of assignments.",
"Let us also check the number of assignments made.",
"coloring_problem1.nassigns",
"Now, let us check the total number of assignments and unassignments, which would be the length of our assignment history. We can see it by using the command below.",
"len(coloring_problem1.assignment_history)",
"Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out the methods in the CSP class that help to make this work. \nThe first one is select_unassigned_variable. It takes in, as a parameter, a function that helps in deciding the order in which the variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the least legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by the number of legal values left in its domain. This function, in turn, calls the nconflicts method of the CSP to return such values.",
"psource(mrv)\n\npsource(num_legal_values)\n\npsource(CSP.nconflicts)",
"Another ordering related parameter order_domain_values governs the value ordering. Here we select the Least Constraining Value which is implemented by the function lcv. The idea is to select the value which rules out least number of values in the remaining variables. The intuition behind selecting the lcv is that it allows a lot of freedom to assign values later. The idea behind selecting the mrc and lcv makes sense because we need to do all variables but for values, and it's better to try the ones that are likely. So for vars, we face the hard ones first.",
"psource(lcv)",
"Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can find out more about these by looking up the source code.\nNow let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance 'usa' for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.",
"solve_simple = copy.deepcopy(usa)\nsolve_parameters = copy.deepcopy(usa)\n\nbacktracking_search(solve_simple)\nbacktracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac)\n\nsolve_simple.nassigns\n\nsolve_parameters.nassigns",
"TREE CSP SOLVER\nThe tree_csp_solver function (Figure 6.11 in the book) can be used to solve problems whose constraint graph is a tree. Given a CSP, with neighbors forming a tree, it returns an assignment that satisfies the given constraints. The algorithm works as follows:\nFirst it finds the topological sort of the tree. This is an ordering of the tree where each variable/node comes after its parent in the tree. The function that accomplishes this is topological_sort; it builds the topological sort using the recursive function build_topological. That function is an augmented DFS (Depth First Search), where each newly visited node of the tree is pushed on a stack. The stack in the end holds the variables topologically sorted.\nThen the algorithm makes arcs between each parent and child consistent. Arc-consistency between two variables, a and b, occurs when for every possible value of a there is an assignment in b that satisfies the problem's constraints. If such an assignment cannot be found, the problematic value is removed from a's possible values. This is done with the use of the function make_arc_consistent, which takes as arguments a variable Xj and its parent, and makes the arc between them consistent by removing any values from the parent which do not allow for a consistent assignment in Xj.\nIf an arc cannot be made consistent, the solver fails. If every arc is made consistent, we move to assigning values.\nFirst we assign a random value to the root from its domain and then we assign values to the rest of the variables. Since the graph is now arc-consistent, we can simply move from variable to variable picking any remaining consistent values. At the end we are left with a valid assignment. If at any point though we find a variable where no consistent value is left in its domain, the solver fails.\nRun the cell below to see the implementation of the algorithm:",
"psource(tree_csp_solver)",
"We will now use the above function to solve a problem. More specifically, we will solve the problem of coloring Australia's map. We have two colors at our disposal: Red and Blue. As a reminder, this is the graph of Australia:\n\"SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: \"\nUnfortunately, as you can see, the above is not a tree. However, if we remove SA, which has arcs to WA, NT, Q, NSW and V, we are left with a tree (we also remove T, since it has no in-or-out arcs). We can now solve this using our algorithm. Let's define the map coloring problem at hand:",
"australia_small = MapColoringCSP(list('RB'),\n 'NT: WA Q; NSW: Q V')",
"We will input australia_small to the tree_csp_solver and print the given assignment.",
"assignment = tree_csp_solver(australia_small)\nprint(assignment)",
"WA, Q and V got painted with the same color and NT and NSW got painted with the other.\nGRAPH COLORING VISUALIZATION\nNext, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The readers need not concern themselves with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as graphs that need to be colored or as constraint graphs for this problem. If interested you can check out a fairly simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.",
"%matplotlib inline\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport time",
"The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which returns such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. The example below will elaborate it further. If this sounds confusing, don't worry. This is not part of the core material and our only goal is to help you visualize how the process works.",
"def make_update_step_function(graph, instru_csp):\n \n #define a function to draw the graphs\n def draw_graph(graph):\n \n G=nx.Graph(graph)\n pos = nx.spring_layout(G,k=0.15)\n return (G, pos)\n \n G, pos = draw_graph(graph)\n \n def update_step(iteration):\n # here iteration is the index of the assignment_history we want to visualize.\n current = instru_csp.assignment_history[iteration]\n # We convert the particular assignment to a default dict so that the color for nodes which \n # have not been assigned defaults to black.\n current = defaultdict(lambda: 'Black', current)\n\n # Now we use colors in the list and default to black otherwise.\n colors = [current[node] for node in G.node.keys()]\n # Finally drawing the nodes.\n nx.draw(G, pos, node_color=colors, node_size=500)\n\n labels = {label:label for label in G.node}\n # Labels shifted by offset so that nodes don't overlap\n label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}\n nx.draw_networkx_labels(G, label_pos, labels, font_size=20)\n\n # display the graph\n plt.show()\n\n return update_step # <-- this is a function\n\ndef make_visualize(slider):\n ''' Takes an input a slider and returns \n callback function for timer and animation\n '''\n \n def visualize_callback(Visualize, time_step):\n if Visualize is True:\n for i in range(slider.min, slider.max + 1):\n slider.value = i\n time.sleep(float(time_step))\n \n return visualize_callback\n ",
"Finally let us plot our problem. We first use the function below to obtain a step function.",
"step_func = make_update_step_function(neighbors, coloring_problem1)",
"Next, we set the canvas size.",
"matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)",
"Finally, our plot using ipywidget slider and matplotib. You can move the slider to experiment and see the colors change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds (upto one second) for each time step.",
"import ipywidgets as widgets\nfrom IPython.display import display\n\niteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assignment_history)-1, step=1, value=0)\nw=widgets.interactive(step_func,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)",
"N-QUEENS VISUALIZATION\nJust like the Graph Coloring Problem, we will start with defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similar to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by plot_board_step function which draws the board using matplotlib and adds queens to it. This function also calls the label_queen_conflicts which modifies the grid placing a 3 in any position where there is a conflict.",
"def label_queen_conflicts(assignment,grid):\n ''' Mark grid with queens that are under conflict. '''\n for col, row in assignment.items(): # check each queen for conflict\n conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items() \n if (temp_row == row and temp_col != col\n or (temp_row+temp_col == row+col and temp_col != col)\n or (temp_row-temp_col == row-col and temp_col != col)}\n \n # Place a 3 in positions where this is a conflict\n for col, row in conflicts.items():\n grid[col][row] = 3\n\n return grid\n\ndef make_plot_board_step_function(instru_csp):\n '''ipywidgets interactive function supports\n single parameter as input. This function\n creates and return such a function by taking\n in input other parameters.\n '''\n n = len(instru_csp.variables)\n \n \n def plot_board_step(iteration):\n ''' Add Queens to the Board.'''\n data = instru_csp.assignment_history[iteration]\n \n grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]\n grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.\n \n # color map of fixed colors\n cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])\n bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).\n norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)\n \n fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)\n\n plt.axis('off')\n fig.axes.get_xaxis().set_visible(False)\n fig.axes.get_yaxis().set_visible(False)\n\n # Place the Queens Unicode Symbol\n for col, row in data.items():\n fig.axes.text(row, col, u\"\\u265B\", va='center', ha='center', family='Dejavu Sans', fontsize=32)\n plt.show()\n \n return plot_board_step",
"Now let us visualize a solution obtained via backtracking. We make use of the previosuly defined make_instru function for keeping a history of steps.",
"twelve_queens_csp = NQueensCSP(12)\nbacktracking_instru_queen = make_instru(twelve_queens_csp)\nresult = backtracking_search(backtracking_instru_queen)\n\nbacktrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets",
"Now finally we set some matplotlib parameters to adjust how our plot will look like. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds of upto one second for each time step.",
"matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)\nmatplotlib.rcParams['font.family'].append(u'Dejavu Sans')\n\niteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assignment_history)-1, step=0, value=0)\nw=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)",
"Now let us finally repeat the above steps for min_conflicts solution.",
"conflicts_instru_queen = make_instru(twelve_queens_csp)\nresult = min_conflicts(conflicts_instru_queen)\n\nconflicts_step = make_plot_board_step_function(conflicts_instru_queen)",
"This visualization has same features as the one above; however, this one also highlights the conflicts by labeling the conflicted queens with a red background.",
"iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assignment_history)-1, step=0, value=0)\nw=widgets.interactive(conflicts_step,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AstroHackWeek/AstroHackWeek2016 | notebook-tutorial/notebooks/01-Tips-and-tricks.ipynb | mit | [
"Best practices\nLet's start with pep8 (https://www.python.org/dev/peps/pep-0008/)\n\nImports should be grouped in the following order:\n\nstandard library imports\nrelated third party imports\nlocal application/library specific imports\n\nYou should put a blank line between each group of imports.\nPut any relevant all specification after the imports.",
"%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%config InlineBackend.figure_format='retina' \n\n# Add this to python2 code to make life easier\nfrom __future__ import absolute_import, division, print_function\n\nfrom itertools import combinations\nimport string\n\nfrom IPython.display import IFrame, HTML, YouTubeVideo\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib.pyplot import GridSpec\nimport seaborn as sns\nimport mpld3\nimport numpy as np\nimport pandas as pd\nimport os, sys\nimport warnings\n\nsns.set();\nplt.rcParams['figure.figsize'] = (12, 8)\nsns.set_style(\"darkgrid\")\nsns.set_context(\"poster\", font_scale=1.3)",
"Pivot Tables w/ pandas\nhttp://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/",
"YouTubeVideo(\"ZbrRrXiWBKc\", width=400, height=300)\n\n!conda install pivottablejs -y\n\ndf = pd.read_csv(\"../data/mps.csv\", encoding=\"ISO-8859-1\")\n\ndf.head(10)\n\nfrom pivottablejs import pivot_ui",
"Enhanced Pandas Dataframe Display",
"pivot_ui(df)\n# Province, Party, Average, Age, Heatmap",
"Keyboard shortcuts\nFor help, ESC + h",
"# in select mode, shift j/k (to select multiple cells at once)\n# split cell with ctrl shift -\n\nfirst = 1\n\nsecond = 2\n\nthird = 3",
"You can also get syntax highlighting if you tell it the language that you're including: \n```bash\nmkdir toc\ncd toc\nwget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js\nwget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css\ncd ..\njupyter-nbextension install --user toc\njupyter-nbextension enable toc/toc\n```\nSQL\nSELECT *\nFROM tablename",
"%%bash\npwd \nfor i in *.ipynb\ndo\n wc $i\ndone\necho \necho \"break\"\necho\ndu -h *ipynb\n\ndef silly_absolute_value_function(xval):\n \"\"\"Takes a value and returns the value.\"\"\"\n xval_sq = xval ** 2.0\n xval_abs = np.sqrt(xval_sq)\n return xval_abs\n\nsilly_absolute_value_function?\n\nsilly_absolute_value_function??\n\n# shift-tab\nsilly_absolute_value_function()\n\n# shift-tab-tab\nsilly_absolute_value_function()\n\n# shift-tab-tab-tab\nsilly_absolute_value_function()\n\nimport numpy as np\n\nnp.sin??",
"Stop here for now\nR\n\npyRserve\nrpy2",
"import numpy as np\n\n# !conda install -c r rpy2 -y\n\nimport rpy2\n\n%load_ext rpy2.ipython\n\nX = np.array([0,1,2,3,4])\nY = np.array([3,5,4,6,7])\n\n%%R?\n\n%%R -i X,Y -o XYcoef\nXYlm = lm(Y~X)\nXYcoef = coef(XYlm)\nprint(summary(XYlm))\npar(mfrow=c(2,2))\nplot(XYlm)\n\nXYcoef"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
BrownDwarf/ApJdataFrames | notebooks/Hernandez2014.ipynb | mit | [
"ApJdataFrames Hernandez2014\nTitle: A SPECTROSCOPIC CENSUS IN YOUNG STELLAR REGIONS: THE σ ORIONIS CLUSTER\nAuthors: Jesus Hernandez, Nuria Calvet, Alice Perez, Cesar Briceno, Lorenzo Olguin, Maria E Contreras, Lee Hartmann, Lori E Allen, Catherine Espaillat, and Ramírez Hernan \nData is from this paper:\nhttp://iopscience.iop.org/0004-637X/794/1/36/article",
"%pylab inline\n\nimport seaborn as sns\nsns.set_context(\"notebook\", font_scale=1.5)\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfrom astropy.io import ascii",
"Table 4 - Low Resolution Analysis",
"tbl4 = ascii.read(\"http://iopscience.iop.org/0004-637X/794/1/36/suppdata/apj500669t4_mrt.txt\")\n\ntbl4[0:4]\n\nNa_mask = ((tbl4[\"f_EWNaI\"] == \"Y\") | (tbl4[\"f_EWNaI\"] == \"N\"))\nprint \"There are {} sources with Na I line detections out of {} sources in the catalog\".format(Na_mask.sum(), len(tbl4))\n\ntbl4_late = tbl4[['Name', '2MASS', 'SpType', 'e_SpType','EWHa', 'f_EWHa', 'EWNaI', 'e_EWNaI', 'f_EWNaI']][Na_mask]\n\ntbl4_late.pprint(max_lines=100, )",
"Meh... not a lot of late type sources... M5.5 is the latest. Oh well.\nScript finished."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/sdk/sdk_automl_image_object_detection_online.ipynb | apache-2.0 | [
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex SDK: AutoML training image object detection model for online prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_image_object_detection_online.ipynb\">\n Open in Google Cloud Notebooks\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex SDK to create image object detection models and do online prediction using a Google Cloud AutoML model.\nDataset\nThe dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.\nObjective\nIn this tutorial, you create an AutoML image object detection model and deploy for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nDeploy the Model resource to a serving Endpoint resource.\nMake a prediction.\nUndeploy the Model.\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nSet up your local development environment\nIf you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.\nOtherwise, make sure your environment meets this notebook's requirements. You need the following:\n\nThe Cloud Storage SDK\nGit\nPython 3\nvirtualenv\nJupyter notebook running in a virtual environment with Python 3\n\nThe Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:\n\n\nInstall and initialize the SDK.\n\n\nInstall Python 3.\n\n\nInstall virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.\n\n\nTo install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.\n\n\nTo launch Jupyter, run jupyter notebook on the command-line in a terminal shell.\n\n\nOpen this notebook in the Jupyter Notebook Dashboard.\n\n\nInstallation\nInstall the latest version of Vertex SDK for Python.",
"import os\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG\n\nif os.environ[\"IS_TESTING\"]:\n ! pip3 install --upgrade tensorflow $USER_FLAG",
"Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.",
"import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nThis tutorial does not require a GPU runtime.\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.",
"BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.",
"! gsutil mb -l $REGION $BUCKET_NAME",
"Finally, validate access to your Cloud Storage bucket by examining its contents:",
"! gsutil ls -al $BUCKET_NAME",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants",
"import google.cloud.aiplatform as aip",
"Initialize Vertex SDK for Python\nInitialize the Vertex SDK for Python for your project and corresponding bucket.",
"aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)",
"Tutorial\nNow you are ready to start creating your own AutoML image object detection model.\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.",
"IMPORT_FILE = \"gs://cloud-samples-data/vision/salads.csv\"",
"Quick peek at your data\nThis tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.",
"if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head",
"Create the Dataset\nNext, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:\n\ndisplay_name: The human readable name for the Dataset resource.\ngcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.\nimport_schema_uri: The data labeling schema for the data items.\n\nThis operation may take several minutes.",
"dataset = aip.ImageDataset.create(\n display_name=\"Salads\" + \"_\" + TIMESTAMP,\n gcs_source=[IMPORT_FILE],\n import_schema_uri=aip.schema.dataset.ioformat.image.bounding_box,\n)\n\nprint(dataset.resource_name)",
"Create and run training pipeline\nTo train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.\nCreate training pipeline\nAn AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:\n\ndisplay_name: The human readable name for the TrainingJob resource.\nprediction_type: The type task to train the model for.\nclassification: An image classification model.\nobject_detection: An image object detection model.\nmulti_label: If a classification task, whether single (False) or multi-labeled (True).\nmodel_type: The type of model for deployment.\nCLOUD: Deployment on Google Cloud\nCLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.\nCLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud.\nMOBILE_TF_VERSATILE_1: Deployment on an edge device.\nMOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device.\nMOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.\nbase_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.\n\nThe instantiated object is the DAG (directed acyclic graph) for the training job.",
"dag = aip.AutoMLImageTrainingJob(\n display_name=\"salads_\" + TIMESTAMP,\n prediction_type=\"object_detection\",\n multi_label=False,\n model_type=\"CLOUD\",\n base_model=None,\n)\n\nprint(dag)",
"Run the training pipeline\nNext, you run the DAG to start the training job by invoking the method run, with the following parameters:\n\ndataset: The Dataset resource to train the model.\nmodel_display_name: The human readable name for the trained model.\ntraining_fraction_split: The percentage of the dataset to use for training.\ntest_fraction_split: The percentage of the dataset to use for test (holdout data).\nvalidation_fraction_split: The percentage of the dataset to use for validation.\nbudget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour).\ndisable_early_stopping: If True, training maybe completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.\n\nThe run method when completed returns the Model resource.\nThe execution of the training pipeline will take upto 60 minutes.",
"model = dag.run(\n dataset=dataset,\n model_display_name=\"salads_\" + TIMESTAMP,\n training_fraction_split=0.8,\n validation_fraction_split=0.1,\n test_fraction_split=0.1,\n budget_milli_node_hours=20000,\n disable_early_stopping=False,\n)",
"Review model evaluation scores\nAfter your model has finished training, you can review the evaluation scores for it.\nFirst, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.",
"# Get model resource ID\nmodels = aip.Model.list(filter=\"display_name=salads_\" + TIMESTAMP)\n\n# Get a reference to the Model Service client\nclient_options = {\"api_endpoint\": f\"{REGION}-aiplatform.googleapis.com\"}\nmodel_service_client = aip.gapic.ModelServiceClient(client_options=client_options)\n\nmodel_evaluations = model_service_client.list_model_evaluations(\n parent=models[0].resource_name\n)\nmodel_evaluation = list(model_evaluations)[0]\nprint(model_evaluation)",
"Deploy the model\nNext, deploy your model for online prediction. To deploy the model, you invoke the deploy method.",
"endpoint = model.deploy()",
"Send a online prediction request\nSend a online prediction to your deployed model.\nGet test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.",
"test_items = !gsutil cat $IMPORT_FILE | head -n1\ncols = str(test_items[0]).split(\",\")\nif len(cols) == 11:\n test_item = str(cols[1])\n test_label = str(cols[2])\nelse:\n test_item = str(cols[0])\n test_label = str(cols[1])\n\nprint(test_item, test_label)",
"Make the prediction\nNow that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource.\nRequest\nSince in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.\nThe format of each instance is:\n{ 'content': { 'b64': base64_encoded_bytes } }\n\nSince the predict() method can take multiple items (instances), send your single test item as a list of one test item.\nResponse\nThe response from the predict() call is a Python dictionary with the following entries:\n\nids: The internal assigned unique identifiers for each prediction request.\ndisplayNames: The class names for each class label.\nconfidences: The predicted confidence, between 0 and 1, per class label.\nbboxes: The bounding box of each detected object.\ndeployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.",
"import base64\n\nimport tensorflow as tf\n\nwith tf.io.gfile.GFile(test_item, \"rb\") as f:\n content = f.read()\n\n# The format of each instance should conform to the deployed model's prediction input schema.\ninstances = [{\"content\": base64.b64encode(content).decode(\"utf-8\")}]\n\nprediction = endpoint.predict(instances=instances)\n\nprint(prediction)",
"Undeploy the model\nWhen you are done doing predictions, you undeploy the model from the Endpoint resouce. This deprovisions all compute resources and ends billing for the deployed model.",
"endpoint.undeploy_all()",
"Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nAutoML Training Job\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_all = True\n\nif delete_all:\n # Delete the dataset using the Vertex dataset object\n try:\n if \"dataset\" in globals():\n dataset.delete()\n except Exception as e:\n print(e)\n\n # Delete the model using the Vertex model object\n try:\n if \"model\" in globals():\n model.delete()\n except Exception as e:\n print(e)\n\n # Delete the endpoint using the Vertex endpoint object\n try:\n if \"endpoint\" in globals():\n endpoint.delete()\n except Exception as e:\n print(e)\n\n # Delete the AutoML or Pipeline trainig job\n try:\n if \"dag\" in globals():\n dag.delete()\n except Exception as e:\n print(e)\n\n # Delete the custom trainig job\n try:\n if \"job\" in globals():\n job.delete()\n except Exception as e:\n print(e)\n\n # Delete the batch prediction job using the Vertex batch prediction object\n try:\n if \"batch_predict_job\" in globals():\n batch_predict_job.delete()\n except Exception as e:\n print(e)\n\n # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object\n try:\n if \"hpt_job\" in globals():\n hpt_job.delete()\n except Exception as e:\n print(e)\n\n if \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
musketeer191/job_analytics | .ipynb_checkpoints/jobtitle_skill-checkpoint.ipynb | gpl-3.0 | [
"Building JobTitle-Skill matrix\nRunning LDA on document-skill matrix, where each document is a job post, still does not give good results!!! What is the problem here?\nIt seems that the job post level has too many noises:\n+ other info not relating to skills i.e. salary, location, working time, required experience.\nThus, we now try putting all posts of the same job title together so that the aggregated skill info can win over the noises.",
"import cluster_skill_helpers as cluster_skill_helpers\n\nfrom cluster_skill_helpers import *\n\nHOME_DIR = 'd:/larc_projects/job_analytics/'; DATA_DIR = HOME_DIR + 'data/clean/'\nSKILL_DIR = DATA_DIR + 'skill_cluster/'; RES_DIR = HOME_DIR + 'results/reports/skill_cluster/'\n\njobs = pd.read_csv(DATA_DIR + 'jobs.csv')\n\nskill_df = pd.read_csv(SKILL_DIR + 'skill_df.csv')",
"Collapse all posts of the same job title into a single document",
"by_job_title = jobs.groupby('title')\njob_title_df = by_job_title.agg({'job_id': lambda x: ','.join(x), 'doc': lambda x: 'next_doc'.join(x)})\n\njob_title_df = job_title_df.add_prefix('agg_').job_title_dfet_index()\njob_title_df.head()\n\nn_job_title = by_job_title.ngroups\nprint('# job titles: %d' %n_job_title)\n\nreload(cluster_skill_helpers)\nfrom cluster_skill_helpers import *\n\njd_docs = job_title_df['agg_doc']\n\n# This version of skills still contain stopwords\ndoc_skill = buildDocSkillMat(jd_docs, skill_df)",
"Concat matrices doc_unigram, doc_bigram and doc_trigram to get occurrences of all skills:",
"from scipy.sparse import hstack\njobtitle_skill = hstack([doc_unigram, doc_bigram, doc_trigram])\n\nwith(open(SKILL_DIR + 'jobtitle_skill.mtx', 'w')) as f:\n mmwrite(f, jobtitle_skill)\n\njobtitle_skill.shape\n\njobtitle_skill = jobtitle_skill.toarray()",
"Most popular skills by job title",
"job_title_df.head(1)\n\nidx_of_top_skill = np.apply_along_axis(np.argmax, 1, jobtitle_skill)\n\n# skill_df = skills\nskills = skill_df['skill']\ntop_skill_by_job_title = pd.DataFrame({'job_title': job_titles, 'idx_of_top_skill': idx_of_top_skill})\ntop_skill_by_job_title['top_skill'] = top_skill_by_job_title['idx_of_top_skill'].apply(lambda i: skills[i])\n\ntop_skill_by_job_title.head(30)\n\nwith(open(SKILL_DIR + 'jobtitle_skill.mtx', 'r')) as f:\n jobtitle_skill = mmread(f)\n\njobtitle_skill = jobtitle_skill.tocsr()\njobtitle_skill.shape\n\njob_titles = job_title_df['title']\n\n# for each row (corresponding to a jobtitle) in matrix jobtitle_skill, get non-zero freqs\nglobal k\nk = 3\n\ndef getTopK_Skills(idx):\n title = job_titles[idx]\n print('Finding top-{} skills of job title {}...'.format(k, title))\n \n skill_occur = jobtitle_skill.getrow(idx)\n tmp = find(skill_occur)\n nz_indices = tmp[1]\n values = tmp[2]\n res = pd.DataFrame({'job_title': title, 'skill_found_in_jd': skills[nz_indices], 'occur_freq': values})\n\n res.sort_values('occur_freq', ascending=False, inplace=True)\n return res.head(k)\n\n# getTopK_Skills(0)\n\nframes = map(getTopK_Skills, range(n_job_title))\nres = pd.concat(frames) # concat() is great as it can concat as many df as possible\nres.head(30)\n\nres.to_csv(RES_DIR + 'top3_skill_by_jobtitle.csv', index=False)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
halexand/NB_Distribution | .ipynb_checkpoints/KL rambling notes on Python-checkpoint.ipynb | mit | [
"Use this to keep track of useful code bits as I learn Python\nKrista, August 19, 2015\nShortcut Action\nShift-Enter run cell\nCtrl-Enter run cell in-place\nAlt-Enter run cell, insert below\n\nCtrl / (Ctrl and then the slash)...will comment out any selected text within a block of code",
"#First up...list the files in a directory\nimport os,sys\nos.listdir(os.getcwd())\n\n#read the CSV file into a data frame and use the pandas head tool to show me the first five rows. \n#note that this doesn't seem to work: pd.head(CO_RawData)\nCO_RawData=pd.read_csv(mtabFile, index_col='RInumber')\nCO_RawData.head(n=5)\n\n#insert an image...the gif file here would be in the folder\nfrom IPython.display import Image\nImage(url=\"R02485.gif\")\n\nfor x in range(0, 3):\n print(\"hello\")\n\nfig.suptitle(CO + ' working') #use the plus sign to concatenate strings for the title\n\nfrom IPython.core.debugger import Tracer #used this to step into the function and debug it, also need line with Tracer()() \nfor i, CO in enumerate(CO_withKO):\n #if i==2:\n #break\n kos=CO_withKO[CO]['Related KO']\n cos=CO_withKO[CO]['Related CO']\n for k in kos: \n if k in KO_RawData.index: \n kData=KO_RawData.loc[kos].dropna()\n kData=(kData.T/kData.sum(axis=1)).T\n cData=CO_RawData.loc[cos].dropna()\n cData=(cData.T/cData.sum(axis=1)).T\n \n fig, ax=plt.subplots(1)\n kData.T.plot(color='r', ax=ax)\n cData.T.plot(color='k', ax=ax)\n \n Tracer()()\n \n getKmeans = CcoClust.loc['C01909']['kmeans']\n makeStringLabel = CO + '_kmeansCluster_' + str(getKmeans)\n #fig.suptitle(CO)\n fig.suptitle(makeStringLabel)\n \n #fig.savefig(CO+'.png') #stop saving all the images for now...\n break\n\n#here, tData is a pandas data frame that I want to plot into a bar graph\n#tData.plot(kind = \"bar\") ##this would be the code to run if tData existed...\n#instead I am reading in the file saved and present in my working directory using this:\nfrom IPython.display import Image\nImage(filename=\"SampleBarGraph.png\")\n\n#indexing in Python is a bit bizarre, or at least takes some getting used to.\n# df.ix[0,'cNumber'] #this will allow me to mix index from integers with index by label\n#other way apparently uses iloc and loc, to use integers and labels respectively\n# this would be df.iloc[0].loc['cNumber] {can't get that to work in the if statement}\n\n#ways to subset data...\nCcoClust.loc['C05356']['kmeans']\ntData = CcoClust.loc['C05356']\ntype(tData)\n\n#want to select only the first group in the kmeans clusters \n#(baby steps, eventually do this for each cluster)\nCcoClust[CcoClust.kmeans==1]",
"/...this is where I learned to not use pip install with scikit-learn...\nTo upgrade scikit-learn:\nconda update scikit-learn",
"import sklearn.cluster\n#from sklearn.cluster import KMeans\n\nsilAverage = [0.4227, 0.33299, 0.354, 0.3768, 0.3362, 0.3014, 0.3041, 0.307, 0.313, 0.325,\n0.3109, 0.2999, 0.293, 0.289, 0.2938, 0.29, 0.288, 0.3, 0.287]\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"OK...can I get a simple scatter plot?",
"plt.scatter(range(0,len(silAverage)), silAverage)\nplt.grid() #put on a grid\n\nplt.xlim(-1,20)\n\n#get list of column names in pandas data frame\nlist(my_dataframe.columns.values)\n\nfor i in range(0,len(ut)):\n if i == 10:\n break\n p = ut.iloc[i,:]\n n = p.name\n if n[0] == 'R':\n #do the plotting, \n #print 'yes'\n CO = p.KEGG\n kos = CO_withKO[CO]['Related KO']\n cos = CO_withKO[CO]['Related CO']\n #Tracer()()\n for k in kos: \n if k in KO_RawData.index: \n kData=KO_RawData.loc[kos].dropna()\n kData=(kData.T/kData.sum(axis=1)).T\n #? why RawData, the output from the K-means will have the normalized data, use that for CO \n #bc easier since that is the file I am working with right now.\n #cData=CO_RawData.loc[cos].dropna()\n #cData=(cData.T/cData.sum(axis=1)).T\n cData = pd.DataFrame(p[dayList]).T\n \n #go back and check, but I think this next step is already done\n #cData=(cData.T/cData.sum(axis=1)).T\n\n fig, ax=plt.subplots(1)\n kData.T.plot(color='r', ax=ax)\n cData.T.plot(color='k', ax=ax)\n \n else:\n #skip over the KO plotting, so effectively doing nothing\n #print 'no'",
"Write a function to match RI number and cNumbers",
"def findRInumber(dataIn,KEGGin):\n #find possible RI numbers for a given KEGG number. \n for i,KEGG in enumerate(dataIn['KEGG']):\n if KEGG == KEGGin:\n t = dataIn.index[i]\n print t\n\n#For example: this will give back one row, C18028 will be multiple\nm = findRInumber(forRelatedness,'C00031') \nm\n\n#to copy a matrix I would think this works: NOPE\n#forRelatedness = CcoClust# this is NOT making a new copy...\n#instead it makes a new pointing to an existing data frame. So you now have two ways to \n#reference the same data frame. Make a change with one term and you can see the same change\n#using the other name. Odd. No idea why you would want that.\n\n\n##this is the test that finally let me understand enumerate\n\n# for index, KEGG in enumerate(useSmall['KEGG']):\n# print index,KEGG\n\n# Windows\nchrome_path = 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s'\n\nurl = \"http://www.genome.jp/dbget-bin/www_bget?cpd:C00019\"\nwebbrowser.get(chrome_path).open_new(url)\n#while a nice idea, this stays open until you close the web browser window.\n\nfrom IPython.display import HTML\ntList = ['C02265','C00001']\nfor i in tList:\n ml = '<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + i + ' width=700 height=350></iframe>'\n print ml\n\nfrom IPython.display import HTML\nCO='C02265'\nHTML('<iframe src = http://www.genome.jp/dbget-bin/www_bget?cpd:' + CO + ' width=700 height=350></iframe>')\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
endgameinc/youarespecial | BSidesLV -- your model isn't that special -- (1) MLP.ipynb | mit | [
"Preliminaries\nWe're going to build and compare a few malware machine learning models in this series of Jupyter notebooks. Some of them require a GPU. I've used a Titan X GPU for this exercise. If yours isn't as beefy, you may get tensorflow memory errors that may require modifying some of the code, namely file_chunks and file_chunk_size. (I'll point to it later.) But, to get started, the first few exercises will work on even that GPU you're embarrassed to tell people about, or if you're willing to wait, no GPU at all.\nFor the fancy folks who have multiple GPUs, we're going to restrict usage to the first one.",
"%env CUDA_VISIBLE_DEVICES=0 # limit GPU usage, if any to this GPU",
"Also note that this exercise assumes you've already populated a malicious/ and a benign/ directory with samples that you consider malicious and benign, respectively. How many samples? In this notebook, I'm using 50K of each for demonstration purposes. Sadly, you must bring your own. If you don't populate these subdirectories for binaries (each renamed to the sha256 hash of its contents!), the code will bicker and complain incessently.\nFeature extraction for feature-based models\nThere is a lot of domain knowledge on what malware authors can do, and what malware authors actually do when crafting malicious files. Furthermore, there are some things malware authors seldom do that would indicate that a file is benign. For each file we want to analyze, we're going to encapsulate that domain knowledge about malicious and benign files in a single feature vector. See the source code at classifier/pefeatures.py.\nNote that the feature extraction we use here contains many elements from published malware classification papers. Some of those are slightly modified. And there are additional features in this particular feature extraction that are included because, well, they were just sitting there in the LIEF parser patiently waiting for a chair at the feature vector table. Read: there's really no secret sauce in there, and to turn this into something commercially viable would take a bit of work. But, be my guest.\nA note about LIEF. What a cool tool with a great mission! It aims to parse and manipulate binary files for Windows (PE), Linux (ELF) and MacOS (macho). Of course, we're using only the PE subset here. At the time of this writing, LIEF is still very much a new tool, and I've worked with the authors to help resolve some kinks. It's a growing project with more warts to find and fix. Nevertheless, we're using it as the backbone for features that requires one to parse a PE file.",
"from classifier import common\n\n# this will take a LONG time the first time you run it (and cache features to disk for next time)\n# it's also chatty. Parts of feature extraction require LIEF, and LIEF is quite chatty.\n# the output you see below is *after* I've already run feature extraction, so that\n# X and sample_index are being read from cache on disk\nX, y, sha256list = common.extract_features_and_persist() \n\n# split our features, labels and hashes into training and test sets\nfrom sklearn.model_selection import train_test_split\nimport numpy as np\nnp.random.seed(123)\nX_train, X_test, y_train, y_test, sha256_train, sha256_test = train_test_split( X, y, sha256list, test_size=1000) \n# a random train_test split, but for a malware classifier, we should really be holding out *future* malicious and benign \n# samples, to better capture how we'll generalize to malware yet to be seen in the wild. ...an exercise left to the reader..",
"Multilayer perceptron\nWe'll use the features we extracted to train a multilayer perceptron (MLP). An MLP is an artificial neural network with at least one hidden layer. Is a multilayer perceptron \"deep learning\"? Well, it's a matter of semantics, but \"deep learning\" may imply that the features and model are optimized together, end-to-end. So, it that sense, no: since we're using domain knowledge to extract features, then pass it to an artificial neural network, we'll remain conservative and call this an MLP. (As we'll see, don't get fooled just because we're not calling this \"deep learning\": this MLP is no slouch.) The network architecture is defined in classifier/simple_multilayer.py.",
"# StandardScaling the data can be important to multilayer perceptron\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler().fit(X_train)\n\n# Note that we're using scaling info form X_train to transform both\nX_train = scaler.transform(X_train) # scale for multilayer perceptron\nX_test = scaler.transform(X_test)\n\nfrom classifier import simple_multilayer\nfrom keras.callbacks import LearningRateScheduler, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint\nmodel = simple_multilayer.create_model(\n input_shape=(X_train.shape[1], ), # input dimensions\n input_dropout=0.05, # this prevents the model becoming a fanboy of (overfitting to) any particular input feature\n hidden_dropout=0.1, # same, but for hidden units. Dropping out hidden layers can create a sort of ensemble learner\n hidden_layers=[4096, 2048, 1024, 512] # this is \"art\". making up # of hidden layers and width of each. don't be afraid to change this\n)\nmodel.fit(X_train, y_train,\n batch_size=128,\n epochs=200,\n verbose=1,\n callbacks=[\n EarlyStopping( patience=20 ),\n ModelCheckpoint( 'multilayer.h5', save_best_only=True),\n ReduceLROnPlateau( patience=5, verbose=1)],\n validation_data=(X_test, y_test))\n\nfrom keras.models import load_model\n# we'll load the \"best\" model (in this case, the penultimate model) that was saved \n# by our ModelCheckPoint callback\nmodel = load_model('multilayer.h5')\n\ny_pred = model.predict(X_test)\ncommon.summarize_performance(y_pred, y_test, \"Multilayer perceptron\") \n# The astute reader will note we should be doing this on a separate holdout, since we've explicitly\n# saved the model that works best on X_test, y_test...an exercise for left for the reader...\n",
"Sanity check: random forest classifier\nAlright. Is that good? Let's compare to another model. We'll reach for the simple and reliable random forest classifier?\nOne nice thing about tree-based classifiers like a random forest classifier is that they are invariant to linear scaling and shifting of the dataset (the model will automatically learn those transformations). Nevertheless, for a sanity check, we're going to use the scaled/transformed features in a random forest classifier.",
"from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\n# you can increase performance by increasing n_estimators, and removing the restriction on max_depth\n# I've kept those in there because I want a quick-and-dirty look at how the MLP above\nrf = RandomForestClassifier( \n n_estimators=40, \n n_jobs=-1, \n max_depth=30\n).fit(X_train, y_train)\n\ny_pred = rf.predict_proba(X_test)[:,-1] # get probabiltiy of malicious (last class == last column )\n_ = common.summarize_performance(y_pred, y_test, \"RF Classifier\")",
"How can we improve?\nReally, it's not a terrible model, but it's nothing special. But, we'd really like to get to the realm of > 99% true positive rate at < 1% false positive rate.\nSeems like we can do one of two things here:\n1. Spend some time working on our dataset, our labels, and our feature extraction, but use the same model.\n2. Make our model special. Really special.\nHey, end-to-end deep learning disrupted object detection, image recognition, speech recognition and machine translation. And that sounds way more interesting than item 1, so let's pull out some end-to-end deep learning for static malware detection!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mmaelicke/felis_python1 | felis_python1/lectures/06_Classes.ipynb | mit | [
"Classes\nOne of the main features in the Python programming language is its object oriented structure. Thus, beside procedual programming (scripting) it's also possible to use Python for object oriented Programming (OOP). \nIn a nutshell, everything in Python is an object and can be understood as an instance of a specific class. Therefore, a class is like a blueprint of how an object is structured and how it should behave. With that in mind, learning to write your own custom classes means implementing more or less any functionality into Python that you can think of. Nowadays, there is hardly anything in the field of information science, that is not implemented in Python.\nLet's have a look at some of the objects.",
"f = open('afile.txt', 'w')\nprint(f)\nprint(f.__class__)\nprint(type(f))\nprint(f.readline)\nf.close()",
"The file object is already implemented in Python, just like thousands of other classes, therefore we do not have to bother with reading and writing files in Pthon. Therefore, let's have a look at defining our own classes. \nA class can be defined using the <span style=\"color: green\">class</span> statement followed by a class name. This is very similar to <span style=\"color: green\">def</span>. Everything inside the class namespace is now part of that class. The shortest possible class does now define nothing inside the namespace (and will therefore have no attributes and no functionality). Nevertheless, it can be instantiated and a reference to the class instance can be assigned to a variable.",
"# define class\nclass Car:\n pass\n\n# create two instances\nvw = Car()\naudi= Car()\n\nprint('vw: ', type(vw), 'audi: ', type(audi))\nprint('vw: ', vw.__class__, 'audi: ', audi.__class__)\nprint('vw: ', str(vw), 'audi: ', str(audi))",
"Methods\nThe shown class <span style='color: blue'>Car</span> is not really useful. But we can define functions inside the class namespace. These functions are called methods. To be correct here: they are called instance methods and should not be confused with class methods, which will not be covered here. \nAlthough, we did not define methods so far, there are already some methods assigned to <span style='color: blue'>Car</span>, which Python created for us. These very generic methods handle the return of the <span style=\"color: green\">type</span> or <span style=\"color: green\">str</span> function if invoked on a <span style='color: blue'>Car</span> instance. \nWe will first focus on a special method, the __init__. This method is already defined, but doesn't do anything. But we can do that and fill the method. It will be called on object instantiation. This way we can set default values and define what a <span style='color: blue'>Car</span> instance should look like after creation. \nLet's define an actual speed and maximum speed for our car, because this is what a car needs.",
"# redefine class\nclass Car:\n def __init__(self):\n self.speed = 0\n self.max_speed = 100\n\n# create two instances\nvw = Car()\naudi = Car()\nprint('vw: speed: %d max speed: %d' % (vw.speed, vw.max_speed))\nprint('audi: speed: %d max speed: %d' % (audi.speed, audi.max_speed))\n\naudi.max_speed = 250\naudi.speed = 260\nvw.speed = - 50.4\n\nprint('vw: speed: %d max speed: %d' % (vw.speed, vw.max_speed))\nprint('audi: speed: %d max speed: %d' % (audi.speed, audi.max_speed))",
"This is better, but still somehow wrong. A car should not be allowed to drive faster than the maximum possible speed. A Volkswagen might not be the best car in the world, but it can do definitely better than negative speeds. A better approach would be to define some methods for accelerating and decelerating the car.<br>\nDefine two methods accelerate and decelerate that accept a value and set the new speed for the car. Prevent the car from negative speeds and stick to the maximum speed.",
"# redefine class\nclass Car:\n pass \n \nvw = Car()\nprint(vw.speed)\nvw.accelerate(60)\nprint(vw.speed)\nvw.accelerate(45)\nprint(vw.speed)\nvw.decelerate(10)\nprint(vw.speed)\nvw.decelerate(2000)\nprint(vw.speed)",
"Magic Methods\nMaybe you recognized the two underscores in the __init__ method. A defined set of function names following this name pattern are called magic methods in Python, because they are influcencing the object behaviour using magic. Beside __init__ two other very important magic methods are __repr__ and __str__. <br>\nThe return value of __str__ defines the string representation of the object instance. This way you can define the return value whenever <span style=\"color: green\">str</span> is called on an object instance. The __repr__ method is very similar, but returns the object representation. Whenever possible, the object shall be recoverable from this returned string. However, with most custom classes this is not easily possible and __repr__ shall return a one line string that clearly identifies the object instance. This is really useful for debugging your code.",
"print('str(vw) old:' , str(vw))\n\nclass Car:\n pass\n \n\nvw = Car()\nvw.accelerate(45)\nprint('str(vw) new:', str(vw))",
"Using these functions, almost any behaviour of the <span style='color: blue'>Car</span> instance can be influenced. \nImagine you are using it in a conditional statement and test two instances for equality or if one instance is bigger than the other one.<br>\n\nAre these two variables equal if they reference exactly the same instance?\nAre they equal in case they are of the same model\nIs one instance bigger in case it's actually faster?\nor has the higher maximum speed?\n\nLet's define a new attribute model, which is requested by __init__ as an argument. Then the magic method __eq__ can be used to check the models of the two instances.<br>\nThe __eq__ method can be defined like: __eq__(self, other) and return either <span style='color: green'>True</span> or <span style='color: green'>False</span>.",
"class Car:\n pass\n\nvw = Car('vw')\nvw2 = Car('vw')\naudi = Car('audi')\n\nprint('vw equals vw2? ',vw == vw2)\nprint('vw equals vw? ',vw == vw)\nprint('vw equals audi? ', vw == audi)\nprint('is vw exactly 9? ', vw == 9)",
"private methods and attributes\nThe <span style='color: blue'>Car</span> class has two methods which are meant to be used for mainpulating the actual speed. Nevertheless, one could directly assign new values, even of other types than integers, to the speed and max_speed attribute. Thus, one would call these attributes public attributes, just like accelerate and decelerate are public methods. This implies to other developers, 'It's ok to directly use these attributes and methods, that's why I putted them there.'",
"vw = Car('audi')\nprint('Speed: ', vw.speed)\nvw.speed = 900\nprint('Speed: ', vw.speed)\nvw.speed = -11023048282\nprint('Speed: ', vw.speed)\nvw.speed = Car('vw')\nprint('Speed: ', vw.speed)",
"Consequently, we want to protect this attribute from access from outside the class itself. Other languages use the keyword <span style=\"color: blue\">private</span> to achieve this. Here, Python is not very explicit, as it does not define a keyword or statement for this. You'll have to prefix your attribute or method name with double underscores. Renaming Car.speed to Car.__speed will therefore not work like shown above.\nAs the user or other developers cannot access the speed anymore, we have to offer a new interface for accessing this attribute. We could either define a method getSpeed returning the actual speed or implement a so called property. This will be introduced in a later shown example.<br>\nNote: Some jupyter notebooks allow accessing a protected attribute, but your Python console won't allow this.",
"class Car:\n pass\n\n\nvw = Car('vw')\nvw.accelerate(45)\nprint(vw)\nvw.decelerate(20)\nprint(vw)\nprint(vw.getSpeed())",
"class attributes\nAll attributes and methods defined so far have one thing in common. They are bound to the instance. That means you can only access or invoke them using a reference to this instance. In most cases this is exactly what you want and would expect, as altering one instance won't influence other class instances. But in some cases this is exactly the desired behaviour. A typical example is counting object instances. For our <span style='color: blue'>Car</span> class this would mean an attribute storing the current amount of instanciated cars. It is not possible to implement this using instance attibutes and methods. <br>\nOne (bad) solution would be shifting the declaration of <span style='color: blue'>Car</span> from the global namespace to a function returning a new car instance. Then the function could increment a global variable. The downside is, that destroyed car instances won't decrement this global variable. A function like this would, by the way, be called a ClassFactory in the Python world.<br>\nThe second (way better) solution are using a class attribute. These attributes are bound to the class, not an instance of that class. That means all instances will operate on the same variable. In the field of data analysis one would implement a counter like this for example for counting the instances of a class handling large data amounts like a raster image. Then the amount of instances could be limited.",
"class Car:\n pass\n\n\nvw = Car('vw')\nprint(vw.count)\naudi = Car('audi')\nprint(audi.count)\n\n\nbmw = Car('bmw')\nprint('BMW:', bmw.max_speed)\nprint('VW:', vw.max_speed)\nprint('Audi:', audi.max_speed)\nprint(vw.count)",
"Inheritance\nAs a proper OOP language Python does also implement inheritance. This means, that one can define a class which inherits the attibutes and classes from another class. You can put other classes into the parenthesis of your class signature and the new class will inherit from these classes. One would call this new class a child class and the class it inherits from a parent class. Every of that child classes can of course inherit to as many children as needed. Then these children will inherit from its parent and all their parents.<br>\nIn case a method or attribute gets re-defined, the child method or attribute will overwrtie the parent methods and attributes.<br>\nA real world example of this concept is the definition of a class that can read different file formats and transform the content into a inner-application special format. You could then first write a class that can do the transformation. Next, another class is defined inheriting from this base class. This class can now read all text files on a very generic level. From here different class can be defined, each one capable of exactly one specific text-based format, like a CSV or JSON reader. Now, each of these specific classes know all the methods from all prent classes and the transformation does not have to be redefined on each level. The second advantage is, that at a later point of time one could decide to implement a generic database reader as well. Then different database engine specific reader could be defined and again inherit all the transformation stuff.\nHere, we will use this concept to write two inheriting class es VW and Audi, which both just set the model into a protected attribute.<br> How could this concept be extended?",
"class VW(Car):\n def __init__(self):\n super(VW, self).__init__('vw')\n\nclass Audi(Car):\n def __init__(self):\n super(Audi, self).__init__('audi')\n \nvw = VW()\naudi = Audi()\n\nvw.accelerate(40)\naudi.accelerate(400)\nprint(vw)\nprint(audi)\nprint(vw == audi)\nprint(isinstance(vw, VW))\nprint(isinstance(vw, Car))",
"Property\nSometimes it would be really handy if an attribute could be altered or calculated before returning it to the user. Or even better: if one could make a function to behave like an attribute. That's exactly what a property does. These are methods with no other argument than self and therefore be executed without parentheses. Using a property like this enables us to reimplement the speed attribute. We're just using a property.<br>\nThe property function is a built-in function that needs a function as only argument and returns exactly the same function again with the added property behaviour. In information science a function expecting another function, altering it and returing it back for usage are called decorators (a concept borrowed from Java). Decorating functions is in Python even easier as you can just use the decorator operator: @.",
"class MyInt(int):\n def as_string(self):\n return 'The value is %s' % self\n\ni = MyInt(5)\nprint(i.as_string())\n\nclass MyInt(int):\n @property\n def as_string(self):\n return 'The value is %s' % self\n \nx = MyInt(7)\nprint(x.as_string)\n\nclass Car: \n pass\n\nclass VW(Car):\n def __init__(self):\n super(VW, self).__init__('vw')\n \nvw = VW()\nvw.accelerate(60)\nprint(vw.speed)",
"Property.setter\nObviously, the protectec __speed attribute cannot be changed and the speed property is a function and thus, cannot be set. In the example of the Car, this absolutely makes sense, but nevertheless, setting a property is also possible. This time the property function is defined again accepting an additional positional argument. This will be filled by the assigned value. The Decorator for the redefinition is the @property.setter function.",
"class Model(object):\n def __init__(self, name):\n self.__model = self.check_model(name)\n \n def check_model(self, name):\n if name.lower() not in ('vw', 'audi'):\n return 'VW'\n else:\n return name.upper()\n \n @property\n def model(self):\n return self.__model\n \n @model.setter\n def model(self, value):\n self.__model = self.check_model(value)\n \ncar = Model('audi')\nprint(car.model)\ncar.model = 'vw'\nprint(car.model)\ncar.model = 'mercedes'\nprint(car.model)\nsetattr(car, '__model', 'mercedes')\nprint(car.model)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/02-Basic-Algorithm-Methods.ipynb | apache-2.0 | [
"Basic Algorithm Methods\nLet's algorithmically test our earlier optimized tech portfolio strategy with Quantopian!\nTHIS CODE ONLY WORKS ON QUANTOPIAN. EACH CELL CORRESPONDS WITH A PART OF THE VIDEO LECTURE. MAKE SURE TO WATCH THE VIDEOS FOR CLARITY ON THIS!\ninitialize()\ninitialize() is called exactly once when our algorithm starts and requires context as input.\ncontext is an augmented Python dictionary used for maintaining state during our backtest or live trading, and can be referenced in different parts of our algorithm. context should be used instead of global variables in the algorithm. Properties can be accessed using dot notation (context.some_property).\n handle_data() \nhandle_data() is called once at the end of each minute and requires context and data as input. context is a reference to the same dictionary in initialize() and data is an object that stores several API functions.\nOur Tech Stock Optimized Portfolio\nLet's use the tech stock portfolio we calculated earlier. Keep in mind that handle_data() is readjusting our portfolio every minute! That may be unreasonable for certain algorithms, but for this example, we will just continue with these basics functions.",
"def initialize(context):\n # Reference to Tech Stocks\n context.aapl = sid(24)\n context.csco = sid(1900)\n context.amzn = sid(16841)\n\ndef handle_data(context, data):\n # Position our portfolio optimization!\n order_target_percent(context.aapl, .27)\n order_target_percent(context.csco, .20)\n order_target_percent(context.amzn, .53)",
"Grabbing Current Data\ndata.current()\ndata.current() can be used to retrieve the most recent value of a given field(s) for a given asset(s). data.current() requires two arguments: the asset or list of assets, and the field or list of fields being queried. Possible fields include 'price', 'open', 'high', 'low', 'close', and 'volume'. The output type will depend on the input types",
"def initialize(context):\n # Reference to Tech Stocks\n context.techies = [sid(16841),\n sid(24),\n sid(1900)]\n\ndef handle_data(context, data):\n # Position our portfolio optimization!\n tech_close = data.current(context.techies, 'close')\n print(type(tech_close)) # Pandas Series\n print(tech_close) # Closing Prices ",
"Note! You can use data.is_stale(sid(#)) to check if the results of data.current() where generated at the current bar (the timeframe) or were forward filled from a previous time.\nChecking for trading\ndata.can_trade()\ndata.can_trade() is used to determine if an asset(s) is currently listed on a supported exchange and can be ordered. If data.can_trade() returns True for a particular asset in a given minute bar, we are able to place an order for that asset in that minute. This is an important guard to have in our algorithm if we hand-pick the securities that we want to trade. It requires a single argument: an asset or a list of assets.",
"def initialize(context):\n # Reference to amazn\n context.amzn = sid(16841)\n \ndef handle_data(context, data):\n # This insures we don't hit an exception!\n if data.can_trade(sid(16841)):\n order_target_percent(context.amzn, 1.0)",
"Checking Historical Data\nWhen your algorithm calls data.history on equities, the returned data is adjusted for splits, mergers, and dividends as of the current simulation date. In other words, when your algorithm asks for a historical window of prices, and there is a split in the middle of that window, the first part of that window will be adjusted for the split. This adustment is done so that your algorithm can do meaningful calculations using the values in the window.\nThis code queries the last 20 days of price history for a static set of securities. Specifically, this returns the closing daily price for the last 20 days, including the current price for the current day. Equity prices are split- and dividend-adjusted as of the current date in the simulation:",
"\ndef initialize(context):\n # AAPL, MSFT, and SPY\n context.assets = [sid(24), sid(1900), sid(16841)]\n\ndef before_trading_start(context,data):\n price_history = data.history(context.assets,\n fields = \"price\", \n bar_count = 5, \n frequency = \"1d\")\n \n print(price_history)\n",
"The bar_count field specifies the number of days or minutes to include in the pandas DataFrame returned by the history function. This parameter accepts only integer values.\nThe frequency field specifies how often the data is sampled: daily or minutely. Acceptable inputs are ‘1d’ or ‘1m’. For other frequencies, use the pandas resample function.\nExamples\nBelow are examples of code along with explanations of the data returned.\nDaily History\nUse \"1d\" for the frequency. The dataframe returned is always in daily bars. The bars never span more than one trading day. For US equities, a daily bar captures the trade activity during market hours (usually 9:30am-4:00pm ET). For US futures, a daily bar captures the trade activity from 6pm-6pm ET (24 hours). For example, the Monday daily bar captures trade activity from 6pm the day before (Sunday) to 6pm on the Monday. Tuesday's daily bar will run from 6pm Monday to 6pm Tuesday, etc. For either asset class, the last bar, if partial, is built using the minutes of the current day.\nExamples (assuming context.assets exists):\n\ndata.history(context.assets, \"price\", 1, \"1d\") returns the current price.\ndata.history(context.assets, \"volume\", 1, \"1d\") returns the volume since the current day's open, even if it is partial.\ndata.history(context.assets, \"price\", 2, \"1d\") returns yesterday's close price and the current price.\ndata.history(context.assets, \"price\", 6, \"1d\") returns the prices for the previous 5 days and the current price.\n\nMinute History\nUse \"1m\" for the frequency.\nExamples (assuming context.assets exists):\n\ndata.history(context.assets, \"price\", 1, \"1m\") returns the current price.\ndata.history(context.assets, \"price\", 2, \"1m\") returns the previous minute's close price and the current price.\ndata.history(context.assets, \"volume\", 60, \"1m\") returns the volume for the previous 60 minutes.\n\nScheduling\nUse schedule_function to indicate when you want other functions to occur. The functions passed in must take context and data as parameters.",
"def initialize(context):\n context.appl = sid(49051)\n\n # At ebginning of trading week\n # At Market Open, set 10% of portfolio to be apple\n schedule_function(open_positions, \n date_rules.week_start(), \n time_rules.market_open())\n \n # At end of trading week\n # 30 min before market close, dump all apple stock.\n schedule_function(close_positions, \n date_rules.week_end(), \n time_rules.market_close(minutes = 30))\n\ndef open_positions(context, data):\n order_target_percent(context.appl, 0.10)\n\ndef close_positions(context, data):\n order_target_percent(context.appl, 0)",
"Portfolio Information\nYou can get portfolio information and record it!",
"def initialize(context):\n context.amzn = sid(16841)\n context.ibm = sid(3766)\n\n schedule_function(rebalance, \n date_rules.every_day(), \n time_rules.market_open())\n schedule_function(record_vars, \n date_rules.every_day(), \n time_rules.market_close())\n\ndef rebalance(context, data):\n # Half of our portfolio long on amazn\n order_target_percent(context.amzn, 0.50)\n # Half is shorting IBM\n order_target_percent(context.ibm, -0.50)\n\ndef record_vars(context, data):\n\n # Plot the counts\n record(amzn_close=data.current(context.amzn, 'close'))\n record(ibm_close=data.current(context.ibm, 'close'))",
"Slippage and Commision\nSlippage\nSlippage is where a simulation estimates the impact of orders on the fill rate and execution price they receive. When an order is placed for a trade, the market is affected. Buy orders drive prices up, and sell orders drive prices down; this is generally referred to as the price_impact of a trade. Additionally, trade orders do not necessarily fill instantaneously. Fill rates are dependent on the order size and current trading volume of the ordered security. The volume_limit determines the fraction of a security's trading volume that can be used by your algorithm.\nIn backtesting and non-brokerage paper trading (Quantopian paper trading), a slippage model can be specified in initialize() using set_slippage(). There are different builtin slippage models that can be used, as well as the option to set a custom model. By default (if a slippage model is not specified), the following volume share slippage model is used:",
"set_slippage(slippage.VolumeShareSlippage(volume_limit = 0.025, \n price_impact = 0.1))",
"Using the default model, if an order of 60 shares is placed for a given stock, then 1000 shares of that stock trade in each of the next several minutes and the volume_limit is 0.025, then our trade order will be split into three orders (25 shares, 25 shares, and 10 shares) that execute over the next 3 minutes.\nAt the end of each day, all open orders are canceled, so trading liquid stocks is generally a good idea. Additionally, orders placed exactly at market close will not have time to fill, and will be canceled.\nCommision\nTo set the cost of trades, we can specify a commission model in initialize() using set_commission(). By default (if a commission model is not specified), the following commission model is used:",
"set_commission(commission.PerShare(cost = 0.0075, \n min_trade_cost = 1))",
"The default commission model charges 0.0075 dollar per share, with a minimum trade cost of $1.\nSlippage and commission models can have an impact on the performance of a backtest. The default models used by Quantopian are fairly realistic, and it is highly recommended that you use them.\nGreat Job!\nThose are all the basics of Quantopians Tutorial! With these key functions you actually know enough to begin trading!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ivannz/study_notes | year_15_16/fall_2015/game theoretic foundations of ml/labs/SVM-lab.ipynb | mit | [
"Применение машины опорных векторов к выявлению фальшивых купюр\nПодключим необходимые библиотеки.",
"import numpy as np, pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn import *\n%matplotlib inline\n\nrandom_state = np.random.RandomState( None )\n\ndef collect_result( grid_, names = [ ] ) :\n df = pd.DataFrame( { \"2-Отклонение\" : [ np.std(v_[ 2 ] ) for v_ in grid_.grid_scores_ ],\n \"1-Точность\" : [ v_[ 1 ] for v_ in grid_.grid_scores_ ], },\n index = pd.MultiIndex.from_tuples(\n [ v_[ 0 ].values() for v_ in grid_.grid_scores_ ],\n names = names ) )\n df.sort_index( )\n return df",
"Данные были взяты из репозитория UCI Machine Learning Repository по адресу http://archive.ics.uci.edu/ml/datasets/banknote+authentication.\nВыборка сконструирована при помощи вейвлет преобразования избражений фальшивых и аутентичных банкнот в градациях серого.",
"df = pd.read_csv( 'data_banknote_authentication.txt', sep = \",\", decimal = \".\", header = None,\n names = [ \"variance\", \"skewness\", \"curtosis\", \"entropy\", \"class\" ] )\n\ny = df.xs( \"class\", axis = 1 )\nX = df.drop( \"class\", axis = 1 )",
"В исследуемых данных мы имеем следующее число точек:",
"print len( X )",
"Загруженные данные разбиваем на две выборки: обучающую ($\\text{_train}$) и тестовую. которая будет не будет использоваться при обучении ($\\text{_test}$).\nРазобьём выборку на обучающую и тестовую в соотношении 2:3.",
"X_train, X_test, y_train, y_test = cross_validation.train_test_split( X, y, test_size = 0.60,\n random_state = random_state )",
"В обучающей выборке имеем столько наблюдений:",
"print len( X_train )",
"Рассмотрим SVM в линейно неразделимом случае с $L^1$ нормой на зазоры $(\\xi_i){i=1}^n$:\n$$ \\frac{1}{2} \\|\\beta\\|^2 + C \\sum{i=1}^n \\xi_i \\to \\min_{\\beta, \\beta_0, (\\xi_i)_{i=1}^n} \\,, $$\nпри условиях: для любого $i=1,\\ldots,n$ требуется $\\xi_i \\geq 0$ и \n$$ \\bigl( \\beta' \\phi(x_i) + \\beta_0 \\bigr) y_i \\geq 1 - \\xi_i \\,.$$",
"svm_clf_ = svm.SVC( probability = True, max_iter = 100000 )",
"Параметры вида ядра (и соответственно отображений признаков $\\phi:\\mathcal{X}\\to\\mathcal{H}$) и параметр регуляризации $C$ будем искать с помощью переборного поиска на сетке с $5$-fold кроссвалидацией на тренировочной выборке $\\text{X_train}$.\nРассмотрим три ядра: гауссовское\n$$ K( x, y ) = \\text{exp}\\bigl{ -\\frac{1}{2\\gamma^2} \\|x-y\\|^2 \\bigr} \\,,$$",
"## Вид ядра : Гауссовское ядро\ngrid_rbf_ = grid_search.GridSearchCV( svm_clf_, param_grid = {\n## Параметр регуляризции: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.\n \"C\" : np.logspace( -4, 1, num = 6 ),\n \"kernel\" : [ \"rbf\" ],\n## Параметр \"концентрации\" Гауссовского ядра\n \"gamma\" : np.logspace( -2, 2, num = 10 ),\n }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )\ndf_rbf_ = collect_result( grid_rbf_, names = [ \"Ядро\", \"C\", \"Параметр\" ] )",
"полимониальное\n$$ K( x, y ) = \\bigl( 1 + \\langle x, y\\rangle\\bigr)^p \\,, $$",
"## Вид ядра : Полиномиальное ядро\ngrid_poly_ = grid_search.GridSearchCV( svm.SVC( probability = True, max_iter = 20000, kernel = \"poly\" ), param_grid = {\n## Параметр регуляризции: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.\n \"C\" : np.logspace( -4, 1, num = 6 ),\n \"kernel\" : [ \"poly\" ], \n## Степень полиномиального ядра\n \"degree\" : [ 2, 3, 5, 7 ],\n }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )\ndf_poly_ = collect_result( grid_poly_, names = [ \"Ядро\", \"C\", \"Параметр\" ] )",
"и линейное (в $\\mathbb{R}^d$)\n$$ K( x, y ) = \\langle x, y\\rangle \\,,$$",
"## Вид ядра : линейное ядро\ngrid_linear_ = grid_search.GridSearchCV( svm_clf_, param_grid = {\n## Параметр регуляризции: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.\n \"C\" : np.logspace( -4, 1, num = 6 ),\n \"kernel\" : [ \"linear\" ],\n \"degree\" : [ 0 ]\n }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )\ndf_linear_ = collect_result( grid_linear_, names = [ \"Ядро\", \"C\", \"Параметр\" ] )",
"Результаты поиска приведены ниже:",
"pd.concat( [ df_linear_, df_poly_, df_rbf_ ], axis = 0 ).sort_index( )",
"Посмотрим точность лучших моделей в каждом классе ядер на тестовтй выборке.\nЛинейное ядро",
"print grid_linear_.best_estimator_\nprint \"Accuracy: %0.3f%%\" % ( grid_linear_.best_estimator_.score( X_test, y_test ) * 100, )",
"Гауссовское ядро",
"print grid_rbf_.best_estimator_\nprint \"Accuracy: %0.3f%%\" % ( grid_rbf_.best_estimator_.score( X_test, y_test ) * 100, )",
"Полимониальное ядро",
"print grid_poly_.best_estimator_\nprint \"Accuracy: %0.3f%%\" % ( grid_poly_.best_estimator_.score( X_test, y_test ) * 100, )",
"Построим ROC-AUC кривую для лучшей моделей.",
"result_ = { name_: metrics.roc_curve( y_test, estimator_.predict_proba( X_test )[:,1] )\n for name_, estimator_ in {\n \"Linear\": grid_linear_.best_estimator_,\n \"Polynomial\": grid_poly_.best_estimator_,\n \"RBF\": grid_rbf_.best_estimator_ }.iteritems( ) }\n\nfig = plt.figure( figsize = ( 16, 9 ) )\nax = fig.add_subplot( 111 )\nax.set_ylim( -0.1, 1.1 ) ; ax.set_xlim( -0.1, 1.1 )\n\nax.set_xlabel( \"FPR\" ) ; ax.set_ylabel( u\"TPR\" )\nax.set_title( u\"ROC-AUC\" )\n\nfor name_, value_ in result_.iteritems( ) :\n fpr, tpr, _ = value_\n ax.plot( fpr, tpr, lw=2, label = name_ )\n\nax.legend( loc = \"lower right\" )",
"Невероятный результат: на тестовой выборке достигается точность $\\geq 99\\%$. И SVM порождает почти идеальный классификатор! Так уж леги данные."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xpharry/Udacity-DLFoudation | your-first-network/.ipynb_checkpoints/dlnd-your-first-neural-network-checkpoint.ipynb | mit | [
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # Activation function is the sigmoid function\n self.activation_function = self.sigmoid\n \n def sigmoid(self, x):\n return 1 / (1 + np.exp(-x))\n \n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = self.activation_function(final_inputs) # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n \n # TODO: Output error\n output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Backpropagated error\n hidden_errors = np.dot(self.weights_hidden_to_output, output_error) # errors propagated to the hidden layer\n hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients\n \n # TODO: Update the weights\n self.weights_hidden_to_output += self.lr * np.dot(hidden_outputs, output_errors).T # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * np.dot(inputs, hidden_errors * hidden_grad).T # update input-to-hidden weights with gradient descent step\n \n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = self.activation_function(final_inputs) # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\nepochs = 1000\nlearning_rate = 0.05\nhidden_nodes = 3\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nUnit tests\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ktmud/deep-learning | student-admissions/StudentAdmissions.ipynb | mit | [
"Predicting Student Admissions with Neural Networks\nIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:\n- GRE Scores (Test)\n- GPA Scores (Grades)\n- Class rank (1-4)\nThe dataset originally came from here: http://www.ats.ucla.edu/\nLoading the data\nTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:\n- https://pandas.pydata.org/pandas-docs/stable/\n- https://docs.scipy.org/",
"# Importing pandas and numpy\nimport pandas as pd\nimport numpy as np\n\n# Reading the csv file into a pandas DataFrame\ndata = pd.read_csv('student_data.csv')\n\n# Printing out the first 10 rows of our data\ndata[:10]",
"Plotting the data\nFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ingore the rank.",
"# Importing matplotlib\nimport matplotlib.pyplot as plt\n\n# Function to help us plot\ndef plot_points(data):\n X = np.array(data[[\"gre\",\"gpa\"]])\n y = np.array(data[\"admit\"])\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')\n plt.xlabel('Test (GRE)')\n plt.ylabel('Grades (GPA)')\n \n# Plotting the points\nplot_points(data)\nplt.show()",
"Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.",
"# Separating the ranks\ndata_rank1 = data[data[\"rank\"]==1]\ndata_rank2 = data[data[\"rank\"]==2]\ndata_rank3 = data[data[\"rank\"]==3]\ndata_rank4 = data[data[\"rank\"]==4]\n\n# Plotting the graphs\nplot_points(data_rank1)\nplt.title(\"Rank 1\")\nplt.show()\nplot_points(data_rank2)\nplt.title(\"Rank 2\")\nplt.show()\nplot_points(data_rank3)\nplt.title(\"Rank 3\")\nplt.show()\nplot_points(data_rank4)\nplt.title(\"Rank 4\")\nplt.show()",
"This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.\nTODO: One-hot encoding the rank\nUse the get_dummies function in numpy in order to one-hot encode the data.",
"# TODO: Make dummy variables for rank\none_hot_data = pass\n\n# TODO: Drop the previous rank column\none_hot_data = pass\n\n# Print the first 10 rows of our data\none_hot_data[:10]",
"TODO: Scaling the data\nThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.",
"# Making a copy of our data\nprocessed_data = one_hot_data[:]\n\n# TODO: Scale the columns\n\n# Printing the first 10 rows of our procesed data\nprocessed_data[:10]",
"Splitting the data into Training and Testing\nIn order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.",
"sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)\ntrain_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)\n\nprint(\"Number of training samples is\", len(train_data))\nprint(\"Number of testing samples is\", len(test_data))\nprint(train_data[:10])\nprint(test_data[:10])",
"Splitting the data into features and targets (labels)\nNow, as a final step before the training, we'll split the data into features (X) and targets (y).",
"features = train_data.drop('admit', axis=1)\ntargets = train_data['admit']\nfeatures_test = test_data.drop('admit', axis=1)\ntargets_test = test_data['admit']\n\nprint(features[:10])\nprint(targets[:10])",
"Training the 2-layer Neural Network\nThe following function trains the 2-layer neural network. First, we'll write some helper functions.",
"# Activation (sigmoid) function\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\ndef sigmoid_prime(x):\n return sigmoid(x) * (1-sigmoid(x))\ndef error_formula(y, output):\n return - y*np.log(output) - (1 - y) * np.log(1-output)",
"TODO: Backpropagate the error\nNow it's your turn to shine. Write the error term. Remember that this is given by the equation $$ -(y-\\hat{y}) \\sigma'(x) $$",
"# TODO: Write the error term formula\ndef error_term_formula(y, output):\n pass\n\n# Neural Network hyperparameters\nepochs = 1000\nlearnrate = 0.5\n\n# Training function\ndef train_nn(features, targets, epochs, learnrate):\n \n # Use to same seed to make debugging easier\n np.random.seed(42)\n\n n_records, n_features = features.shape\n last_loss = None\n\n # Initialize weights\n weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n\n for e in range(epochs):\n del_w = np.zeros(weights.shape)\n for x, y in zip(features.values, targets):\n # Loop through all records, x is the input, y is the target\n\n # Activation of the output unit\n # Notice we multiply the inputs and the weights here \n # rather than storing h as a separate variable \n output = sigmoid(np.dot(x, weights))\n\n # The error, the target minus the network output\n error = error_formula(y, output)\n\n # The error term\n # Notice we calulate f'(h) here instead of defining a separate\n # sigmoid_prime function. This just makes it faster because we\n # can re-use the result of the sigmoid function stored in\n # the output variable\n error_term = error_term_formula(y, output)\n\n # The gradient descent step, the error times the gradient times the inputs\n del_w += error_term * x\n\n # Update the weights here. The learning rate times the \n # change in weights, divided by the number of records to average\n weights += learnrate * del_w / n_records\n\n # Printing out the mean square error on the training set\n if e % (epochs / 10) == 0:\n out = sigmoid(np.dot(features, weights))\n loss = np.mean((out - targets) ** 2)\n print(\"Epoch:\", e)\n if last_loss and last_loss < loss:\n print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n else:\n print(\"Train loss: \", loss)\n last_loss = loss\n print(\"=========\")\n print(\"Finished training!\")\n return weights\n \nweights = train_nn(features, targets, epochs, learnrate)",
"Calculating the Accuracy on the Test Data",
"# Calculate accuracy on test data\ntes_out = sigmoid(np.dot(features_test, weights))\npredictions = tes_out > 0.5\naccuracy = np.mean(predictions == targets_test)\nprint(\"Prediction accuracy: {:.3f}\".format(accuracy))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kit-cel/wt | nt1/vorlesung/extra/dsss.ipynb | gpl-2.0 | [
"Content and Objectives\n\nShow spreading in time and frequency domain\nBPSk symbols are being pulse-shaped by rectangular w. and wo. spreading\n\nImport",
"# importing\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n\n# showing figures inline\n%matplotlib inline\n\n# plotting options \nfont = {'size' : 20}\nplt.rc('font', **font)\nplt.rc('text', usetex=True)\n\nmatplotlib.rc('figure', figsize=(18, 6) )",
"Parameters",
"# number of realizations along which to average the psd estimate\nn_real = 100\n\n# modulation scheme and constellation points\nconstellation = [ -1, 1 ]\n\n# number of symbols \nn_symb = 100\nt_symb = 1.0 \n\nchips_per_symbol = 8\nsamples_per_chip = 8\nsamples_per_symbol = samples_per_chip * chips_per_symbol\n\n\n# parameters for frequency regime\nN_fft = 512\nomega = np.linspace( -np.pi, np.pi, N_fft )\nf_vec = omega / ( 2 * np.pi * t_symb / samples_per_symbol )",
"Real data-modulated Tx-signal",
"# define rectangular function responses \nrect = np.ones( samples_per_symbol )\nrect /= np.linalg.norm( rect )\n\n\n# number of realizations along which to average the psd estimate\nn_real = 10\n\n\n# initialize two-dimensional field for collecting several realizations along which to average \nRECT_PSD = np.zeros( (n_real, N_fft ) ) \nDSSS_PSD = np.zeros( (n_real, N_fft ) )\n\n\n# get chips and signature\n\n# NOTE: looping until number of +-1 chips in | sum ones - 0.5 N_chips | < 0.2 N_chips,\n# i.e., number of +1,-1 is approximately 1/2 (up to 20 percent)\nwhile True:\n dsss_chips = (-1) ** np.random.randint( 0, 2, size = chips_per_symbol )\n\n if np.abs( np.sum( dsss_chips > 0) - chips_per_symbol/2 ) / chips_per_symbol < .2:\n break\n\n# generate signature out of chips by putting samples_per_symbol samples with chip amplitude \n# normalize signature to energy 1\ndsss_signature = np.ones( samples_per_symbol )\nfor n in range( chips_per_symbol ):\n dsss_signature[ n * samples_per_chip : (n+1) * samples_per_chip ] *= dsss_chips[ n ] \ndsss_signature /= np.linalg.norm( dsss_signature ) \n \n \n# activate switch if chips should be resampled for every simulation\n# this would average (e.g., for PSD) instead of showing \"one reality\"\nnew_chips_per_sim = 1\n \n \n# loop for realizations\nfor k in np.arange( n_real ):\n\n if new_chips_per_sim:\n \n # resample signature using identical method as above\n while True:\n dsss_chips = (-1) ** np.random.randint( 0, 2, size = chips_per_symbol )\n if np.abs( np.sum( dsss_chips > 0) - chips_per_symbol/2 ) / chips_per_symbol < .2:\n break\n \n # get signature \n dsss_signature = np.ones( samples_per_symbol )\n for n in range( chips_per_symbol ):\n dsss_signature[ n * samples_per_chip : (n+1) * samples_per_chip ] *= dsss_chips[ n ]\n dsss_signature /= np.linalg.norm( dsss_signature ) \n \n # generate random binary vector and modulate\n data = np.random.randint( 2, size = n_symb )\n mod = [ constellation[ d ] for d in data ]\n\n # get signals by putting symbols and filtering\n s_up = np.zeros( n_symb * samples_per_symbol ) \n s_up[ :: samples_per_symbol ] = mod\n\n \n # apply RECTANGULAR and CDMA shaping in time domain\n s_rect = np.convolve( rect, s_up ) \n s_dsss = np.convolve( dsss_signature, s_up )\n\n \n # get spectrum \n RECT_PSD[ k, :] = np.abs( np.fft.fftshift( np.fft.fft( s_rect, N_fft ) ) )**2\n DSSS_PSD[ k, :] = np.abs( np.fft.fftshift( np.fft.fft( s_dsss, N_fft ) ) )**2\n\n# average along realizations\nRECT_av = np.average( RECT_PSD, axis=0 )\nRECT_av /= np.max( RECT_av )\n\nDSSS_av = np.average( DSSS_PSD, axis=0 )\nDSSS_av /= np.max( DSSS_av )\n\n# show limited amount of symbols in time domain\nN_syms_plot = 5\nt_plot = np.arange( 0, N_syms_plot * t_symb, t_symb / samples_per_symbol )\n\n\n# plot\nplt.figure()\n\nplt.subplot(121)\nplt.plot( t_plot, s_rect[ : N_syms_plot * samples_per_symbol], linewidth=2.0, label='Rect') \nplt.plot( t_plot, s_dsss[ : N_syms_plot * samples_per_symbol ], linewidth=2.0, label='DS-SS') \n\nplt.ylim( (-1.1, 1.1 ) ) \nplt.grid( True )\nplt.legend(loc='upper right') \nplt.xlabel('$t/T$')\nplt.title('$s(t)$')\n\nplt.subplot(122)\n\nnp.seterr(divide='ignore') # ignore warning for logarithm of 0\nplt.plot( f_vec, 10*np.log10( RECT_av ), linewidth=2.0, label='Rect., sim.' ) \nplt.plot( f_vec, 10*np.log10( DSSS_av ), linewidth=2.0, label='DS-SS, sim.' 
) \nnp.seterr(divide='warn') # enable warning for logarithm of 0\n\nplt.grid(True) \nplt.legend(loc='lower right') \nplt.ylim( (-60, 10 ) )\n\nplt.xlabel('$fT$')\nplt.title('$|S(f)|^2$')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Jay-Jay-D/LeanSTP | Jupyter/KitchenSinkQuantBookTemplate.ipynb | apache-2.0 | [
"Welcome to The QuantConnect Research Page\nRefer to this page for documentation https://www.quantconnect.com/docs#Introduction-to-Jupyter\nContribute to this template file https://github.com/QuantConnect/Lean/blob/master/Jupyter/BasicQuantBookTemplate.ipynb\nQuantBook Basics\nStart QuantBook\n\nAdd the references and imports\nCreate a QuantBook instance",
"%matplotlib inline\n# Imports\nfrom clr import AddReference\nAddReference(\"System\")\nAddReference(\"QuantConnect.Common\")\nAddReference(\"QuantConnect.Jupyter\")\nAddReference(\"QuantConnect.Indicators\")\nfrom System import *\nfrom QuantConnect import *\nfrom QuantConnect.Data.Custom import *\nfrom QuantConnect.Data.Market import TradeBar, QuoteBar\nfrom QuantConnect.Jupyter import *\nfrom QuantConnect.Indicators import *\nfrom datetime import datetime, timedelta\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# Create an instance\nqb = QuantBook()",
"Selecting Asset Data\nCheckout the QuantConnect docs to learn how to select asset data.",
"spy = qb.AddEquity(\"SPY\")\neur = qb.AddForex(\"EURUSD\")\nbtc = qb.AddCrypto(\"BTCUSD\")\nfxv = qb.AddData[FxcmVolume](\"EURUSD_Vol\", Resolution.Hour)",
"Historical Data Requests\nWe can use the QuantConnect API to make Historical Data Requests. The data will be presented as multi-index pandas.DataFrame where the first index is the Symbol.\nFor more information, please follow the link.",
"# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution\nh1 = qb.History(qb.Securities.Keys, 360, Resolution.Daily)\n\n# Plot closing prices from \"SPY\" \nh1.loc[\"SPY\"][\"close\"].plot()\n\n# Gets historical data from the subscribed assets, from the last 30 days with daily resolution\nh2 = qb.History(qb.Securities.Keys, timedelta(360), Resolution.Daily)\n\n# Plot high prices from \"EURUSD\" \nh2.loc[\"EURUSD\"][\"high\"].plot()\n\n# Gets historical data from the subscribed assets, between two dates with daily resolution\nh3 = qb.History([btc.Symbol], datetime(2014,1,1), datetime.now(), Resolution.Daily)\n\n# Plot closing prices from \"BTCUSD\" \nh3.loc[\"BTCUSD\"][\"close\"].plot()\n\n# Only fetchs historical data from a desired symbol\nh4 = qb.History([spy.Symbol], 360, Resolution.Daily)\n# or qb.History([\"SPY\"], 360, Resolution.Daily)\n\n# Only fetchs historical data from a desired symbol\nh5 = qb.History([eur.Symbol], timedelta(360), Resolution.Daily)\n# or qb.History([\"EURUSD\"], timedelta(30), Resolution.Daily)\n\n# Fetchs custom data\nh6 = qb.History([fxv.Symbol], timedelta(360))\nh6.loc[fxv.Symbol.Value][\"volume\"].plot()",
"Historical Options Data Requests\n\nSelect the option data\nSets the filter, otherwise the default will be used SetFilter(-1, 1, timedelta(0), timedelta(35))\nGet the OptionHistory, an object that has information about the historical options data",
"goog = qb.AddOption(\"GOOG\")\ngoog.SetFilter(-2, 2, timedelta(0), timedelta(180))\n\noption_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))\nprint (option_history.GetStrikes())\nprint (option_history.GetExpiryDates())\nh7 = option_history.GetAllData()",
"Historical Future Data Requests\n\nSelect the future data\nSets the filter, otherwise the default will be used SetFilter(timedelta(0), timedelta(35))\nGet the FutureHistory, an object that has information about the historical future data",
"es = qb.AddFuture(\"ES\")\nes.SetFilter(timedelta(0), timedelta(180))\n\nfuture_history = qb.GetFutureHistory(es.Symbol, datetime(2017, 1, 4))\nprint (future_history.GetExpiryDates())\nh7 = future_history.GetAllData()",
"Get Fundamental Data\n\nGetFundamental([symbol], selector, start_date = datetime(1998,1,1), end_date = datetime.now())\n\nWe will get a pandas.DataFrame with fundamental data.",
"data = qb.GetFundamental([\"AAPL\",\"AIG\",\"BAC\",\"GOOG\",\"IBM\"], \"ValuationRatios.PERatio\")\ndata",
"Indicators\nWe can easily get the indicator of a given symbol with QuantBook. \nFor all indicators, please checkout QuantConnect Indicators Reference Table",
"# Example with BB, it is a datapoint indicator\n# Define the indicator\nbb = BollingerBands(30, 2)\n\n# Gets historical data of indicator\nbbdf = qb.Indicator(bb, \"SPY\", 360, Resolution.Daily)\n\n# drop undesired fields\nbbdf = bbdf.drop('standarddeviation', 1)\n\n# Plot\nbbdf.plot()\n\n# For EURUSD\nbbdf = qb.Indicator(bb, \"EURUSD\", 360, Resolution.Daily)\nbbdf = bbdf.drop('standarddeviation', 1)\nbbdf.plot()\n\n# Example with ADX, it is a bar indicator\nadx = AverageDirectionalIndex(\"adx\", 14)\nadxdf = qb.Indicator(adx, \"SPY\", 360, Resolution.Daily)\nadxdf.plot()\n\n# For EURUSD\nadxdf = qb.Indicator(adx, \"EURUSD\", 360, Resolution.Daily)\nadxdf.plot()\n\n# Example with ADO, it is a tradebar indicator (requires volume in its calculation)\nado = AccumulationDistributionOscillator(\"ado\", 5, 30)\nadodf = qb.Indicator(ado, \"SPY\", 360, Resolution.Daily)\nadodf.plot()\n\n# For EURUSD. \n# Uncomment to check that this SHOULD fail, since Forex is data type is not TradeBar.\n# adodf = qb.Indicator(ado, \"EURUSD\", 360, Resolution.Daily)\n# adodf.plot()\n\n# SMA cross:\nsymbol = \"EURUSD\"\n# Get History \nhist = qb.History([symbol], 500, Resolution.Daily)\n# Get the fast moving average\nfast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily)\n# Get the fast moving average\nslow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily)\n\n# Remove undesired columns and rename others \nfast = fast.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'fast'})\nslow = slow.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'slow'})\n\n# Concatenate the information and plot \ndf = pd.concat([hist.loc[symbol][\"close\"], fast, slow], axis=1).dropna(axis=0)\ndf.plot()\n\n# Get indicator defining a lookback period in terms of timedelta\nema1 = qb.Indicator(ExponentialMovingAverage(50), \"SPY\", timedelta(100), Resolution.Daily)\n# Get indicator defining a start and end date\nema2 = qb.Indicator(ExponentialMovingAverage(50), \"SPY\", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily)\n\nema = pd.concat([ema1, ema2], axis=1)\nema.plot()\n\nrsi = RelativeStrengthIndex(14)\n\n# Selects which field we want to use in our indicator (default is Field.Close)\nrsihi = qb.Indicator(rsi, \"SPY\", 360, Resolution.Daily, Field.High)\nrsilo = qb.Indicator(rsi, \"SPY\", 360, Resolution.Daily, Field.Low)\nrsihi = rsihi.rename(columns={'relativestrengthindex': 'high'})\nrsilo = rsilo.rename(columns={'relativestrengthindex': 'low'})\nrsi = pd.concat([rsihi['high'], rsilo['low']], axis=1)\nrsi.plot()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
johnbachman/emcee | docs/_static/notebooks/quickstart.ipynb | mit | [
"%matplotlib inline\n\n%config InlineBackend.figure_format = \"retina\"\n\nfrom matplotlib import rcParams\nrcParams[\"savefig.dpi\"] = 100\nrcParams[\"figure.dpi\"] = 100\nrcParams[\"font.size\"] = 20",
"Quickstart\nThe easiest way to get started with using emcee is to use it for a project. To get you started, here’s an annotated, fully-functional example that demonstrates a standard usage pattern.\nHow to sample a multi-dimensional Gaussian\nWe’re going to demonstrate how you might draw samples from the multivariate Gaussian density given by:\n$$\np(\\vec{x}) \\propto \\exp \\left [ - \\frac{1}{2} (\\vec{x} -\n \\vec{\\mu})^\\mathrm{T} \\, \\Sigma ^{-1} \\, (\\vec{x} - \\vec{\\mu})\n \\right ]\n$$\nwhere $\\vec{\\mu}$ is an $N$-dimensional vector position of the mean of the density and $\\Sigma$ is the square N-by-N covariance matrix.\nThe first thing that we need to do is import the necessary modules:",
"import numpy as np",
"Then, we’ll code up a Python function that returns the density $p(\\vec{x})$ for specific values of $\\vec{x}$, $\\vec{\\mu}$ and $\\Sigma^{-1}$. In fact, emcee actually requires the logarithm of $p$. We’ll call it log_prob:",
"def log_prob(x, mu, cov):\n diff = x - mu\n return -0.5 * np.dot(diff, np.linalg.solve(cov, diff))",
"It is important that the first argument of the probability function is\nthe position of a single \"walker\" (a N dimensional\nnumpy array). The following arguments are going to be constant every\ntime the function is called and the values come from the args parameter\nof our :class:EnsembleSampler that we'll see soon.\nNow, we'll set up the specific values of those \"hyperparameters\" in 5\ndimensions:",
"ndim = 5\n\nnp.random.seed(42)\nmeans = np.random.rand(ndim)\n\ncov = 0.5 - np.random.rand(ndim ** 2).reshape((ndim, ndim))\ncov = np.triu(cov)\ncov += cov.T - np.diag(cov.diagonal())\ncov = np.dot(cov, cov)",
"and where cov is $\\Sigma$.\nHow about we use 32 walkers? Before we go on, we need to guess a starting point for each\nof the 32 walkers. This position will be a 5-dimensional vector so the\ninitial guess should be a 32-by-5 array.\nIt's not a very good guess but we'll just guess a\nrandom number between 0 and 1 for each component:",
"nwalkers = 32\np0 = np.random.rand(nwalkers, ndim)",
"Now that we've gotten past all the bookkeeping stuff, we can move on to\nthe fun stuff. The main interface provided by emcee is the\n:class:EnsembleSampler object so let's get ourselves one of those:",
"import emcee\n\nsampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[means, cov])",
"Remember how our function log_prob required two extra arguments when it\nwas called? By setting up our sampler with the args argument, we're\nsaying that the probability function should be called as:",
"log_prob(p0[0], means, cov)",
"If we didn't provide any\nargs parameter, the calling sequence would be log_prob(p0[0]) instead.\nIt's generally a good idea to run a few \"burn-in\" steps in your MCMC\nchain to let the walkers explore the parameter space a bit and get\nsettled into the maximum of the density. We'll run a burn-in of 100\nsteps (yep, I just made that number up... it's hard to really know\nhow many steps of burn-in you'll need before you start) starting from\nour initial guess p0:",
"state = sampler.run_mcmc(p0, 100)\nsampler.reset()",
"You'll notice that I saved the final position of the walkers (after the\n100 steps) to a variable called state. You can check out what will be\ncontained in the other output variables by looking at the documentation for\nthe :func:EnsembleSampler.run_mcmc function. The call to the\n:func:EnsembleSampler.reset method clears all of the important bookkeeping\nparameters in the sampler so that we get a fresh start. It also clears the\ncurrent positions of the walkers so it's a good thing that we saved them\nfirst.\nNow, we can do our production run of 10000 steps:",
"sampler.run_mcmc(state, 10000);",
"The samples can be accessed using the :func:EnsembleSampler.get_chain method.\nThis will return an array\nwith the shape (10000, 32, 5) giving the parameter values for each walker\nat each step in the chain.\nTake note of that shape and make sure that you know where each of those numbers come from.\nYou can make histograms of these samples to get an estimate of the density that you were sampling:",
"import matplotlib.pyplot as plt\n\nsamples = sampler.get_chain(flat=True)\nplt.hist(samples[:, 0], 100, color=\"k\", histtype=\"step\")\nplt.xlabel(r\"$\\theta_1$\")\nplt.ylabel(r\"$p(\\theta_1)$\")\nplt.gca().set_yticks([]);",
"Another good test of whether or not the sampling went well is to check\nthe mean acceptance fraction of the ensemble using the\n:func:EnsembleSampler.acceptance_fraction property:",
"print(\"Mean acceptance fraction: {0:.3f}\".format(np.mean(sampler.acceptance_fraction)))",
"and the integrated autocorrelation time (see the :ref:autocorr tutorial for more details)",
"print(\n \"Mean autocorrelation time: {0:.3f} steps\".format(\n np.mean(sampler.get_autocorr_time())\n )\n)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turi-code/tutorials | dss-2016/churn_prediction/churn-tutorial.ipynb | apache-2.0 | [
"Forecasting customer churn\nChurn prediction is the task of identifying users that are likely to stop using a service, product or website. In this notebook, you will learn how to:\nTrain & consume a model to forecast user churn\n\nDefine the boundary at which churn happens.\nDefine a churn period.\nTrain a model using data from the past.\nMake predictions for probability of churn for each user.\n\nLet's get started!",
"import graphlab as gl\nimport datetime\ngl.canvas.set_target('ipynb') # make sure plots appear inline",
"Load previously saved data\nIn the previous notebook, we had saved the data in a binary format. Let us try and load the data back.",
"interactions_ts = gl.TimeSeries(\"data/user_activity_data.ts/\")\nusers = gl.SFrame(\"data/users.sf/\")",
"Training a churn predictor\nWe define churn to be no activity within a period of time (called the churn_period). Hence,\na user/customer is said to have churned if periods of activity is followed\nby no activity for a churn_period (for example, 30 days). \n<img src=\"https://dato.com/learn/userguide/churn_prediction/images/churn-illustration.png\", align=\"left\">",
"churn_period_oct = datetime.datetime(year = 2011, month = 10, day = 1)",
"Making a train-validation split\nNext, we perform a train-validation split where we randomly split the data such that one split contains data for a fraction of the users while the second split contains all data for the rest of the users.",
"(train, valid) = gl.churn_predictor.random_split(interactions_ts, user_id = 'CustomerID', fraction = 0.9, seed = 12)\n\nprint \"Users in the training dataset : %s\" % len(train['CustomerID'].unique())\nprint \"Users in the validation dataset : %s\" % len(valid['CustomerID'].unique())",
"Training a churn predictor model",
"model = gl.churn_predictor.create(train, user_id='CustomerID', \n user_data = users, time_boundaries = [churn_period_oct])\n\nmodel",
"Consuming predictions made by the model\nHere the question to ask is will they churn after a certain period of time. To validate we can see if they user has used us after that evaluation period. Voila! I was confusing it with expiration time (customer churn not usage churn)",
"predictions = model.predict(valid, user_data=users)\npredictions\n\npredictions['probability'].show()",
"Evaluating the model",
"metrics = model.evaluate(valid, user_data=users, time_boundary=churn_period_oct)\nmetrics\n\nmodel.save('data/churn_model.mdl')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_info.ipynb | bsd-3-clause | [
"%matplotlib inline",
"The :class:Info <mne.Info> data structure\nThe :class:Info <mne.Info> data object is typically created\nwhen data is imported into MNE-Python and contains details such as:\n\ndate, subject information, and other recording details\nthe sampling rate\ninformation about the data channels (name, type, position, etc.)\ndigitized points\nsensor–head coordinate transformation matrices\n\nand so forth. See the :class:the API reference <mne.Info>\nfor a complete list of all data fields. Once created, this object is passed\naround throughout the data analysis pipeline.",
"import mne\nimport os.path as op",
":class:mne.Info behaves as a nested Python dictionary:",
"# Read the info object from an example recording\ninfo = mne.io.read_info(\n op.join(mne.datasets.sample.data_path(), 'MEG', 'sample',\n 'sample_audvis_raw.fif'), verbose=False)",
"List all the fields in the info object",
"print('Keys in info dictionary:\\n', info.keys())",
"Obtain the sampling rate of the data",
"print(info['sfreq'], 'Hz')",
"List all information about the first data channel",
"print(info['chs'][0])",
"Obtaining subsets of channels\nThere are a number of convenience functions to obtain channel indices, given\nan :class:mne.Info object.\nGet channel indices by name",
"channel_indices = mne.pick_channels(info['ch_names'], ['MEG 0312', 'EEG 005'])",
"Get channel indices by regular expression",
"channel_indices = mne.pick_channels_regexp(info['ch_names'], 'MEG *')",
"Channel types\nMNE supports different channel types:\n\neeg : For EEG channels with data stored in Volts (V)\nmeg (mag) : For MEG magnetometers channels stored in Tesla (T)\nmeg (grad) : For MEG gradiometers channels stored in Tesla/Meter (T/m)\necg : For ECG channels stored in Volts (V)\nseeg : For Stereotactic EEG channels in Volts (V).\necog : For Electrocorticography (ECoG) channels in Volts (V).\nfnirs (HBO) : Functional near-infrared spectroscopy oxyhemoglobin data.\nfnirs (HBR) : Functional near-infrared spectroscopy deoxyhemoglobin data.\nemg : For EMG channels stored in Volts (V)\nbio : For biological channels (AU).\nstim : For the stimulus (a.k.a. trigger) channels (AU)\nresp : For the response-trigger channel (AU)\nchpi : For HPI coil channels (T).\nexci : Flux excitation channel used to be a stimulus channel.\nias : For Internal Active Shielding data (maybe on Triux only).\nsyst : System status channel information (on Triux systems only).\n\nGet channel indices by type",
"channel_indices = mne.pick_types(info, meg=True) # MEG only\nchannel_indices = mne.pick_types(info, eeg=True) # EEG only",
"MEG gradiometers and EEG channels",
"channel_indices = mne.pick_types(info, meg='grad', eeg=True)",
"Get a dictionary of channel indices, grouped by channel type",
"channel_indices_by_type = mne.io.pick.channel_indices_by_type(info)\nprint('The first three magnetometers:', channel_indices_by_type['mag'][:3])",
"Obtaining information about channels",
"# Channel type of a specific channel\nchannel_type = mne.io.pick.channel_type(info, 75)\nprint('Channel #75 is of type:', channel_type)",
"Channel types of a collection of channels",
"meg_channels = mne.pick_types(info, meg=True)[:10]\nchannel_types = [mne.io.pick.channel_type(info, ch) for ch in meg_channels]\nprint('First 10 MEG channels are of type:\\n', channel_types)",
"Dropping channels from an info structure\nIt is possible to limit the info structure to only include a subset of\nchannels with the :func:mne.pick_info function:",
"# Only keep EEG channels\neeg_indices = mne.pick_types(info, meg=False, eeg=True)\nreduced_info = mne.pick_info(info, eeg_indices)\n\nprint(reduced_info)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
computational-class/cjc2016 | code/04.PythonCrawler_selenium.ipynb | mit | [
"数据抓取\n\n使用Selenium操纵浏览器\n\n\n\n王成军 \[email protected]\n计算传播网 http://computational-communication.com\nSelenium 是一套完整的web应用程序测试系统,包含了\n- 测试的录制(selenium IDE)\n- 编写及运行(Selenium Remote Control)\n- 测试的并行处理(Selenium Grid)。\nSelenium的核心Selenium Core基于JsUnit,完全由JavaScript编写,因此可以用于任何支持JavaScript的浏览器上。selenium可以模拟真实浏览器,自动化测试工具,支持多种浏览器,爬虫中主要用来解决JavaScript渲染问题。https://www.cnblogs.com/zhaof/p/6953241.html\n上面我们知道了selenium支持很多的浏览器,但是如果想要声明并调用浏览器则需要:\nhttps://pypi.org/project/selenium/",
"!pip install selenium",
"Webdriver\n\n主要用的是selenium的Webdriver\n我们可以通过下面的方式先看看Selenium.Webdriver支持哪些浏览器",
"from selenium import webdriver\n\nhelp(webdriver) ",
"下载和设置Webdriver\n对于Chrome需要的webdriver下载地址\nhttp://chromedriver.storage.googleapis.com/index.html\n需要将webdriver放在系统路径下:\n- 确保anaconda在系统路径名里\n- 把下载的webdriver 放在Anaconda的bin文件夹下\nPhantomJS\nPhantomJS是一个而基于WebKit的服务端JavaScript API,支持Web而不需要浏览器支持,其快速、原生支持各种Web标准:Dom处理,CSS选择器,JSON等等。PhantomJS可以用用于页面自动化、网络监测、网页截屏,以及无界面测试",
"#browser = webdriver.Firefox() # 打开Firefox浏览器\nbrowser = webdriver.Chrome() # 打开Chrome浏览器",
"访问页面",
"from selenium import webdriver\n\nbrowser = webdriver.Chrome()\n \nbrowser.get(\"http://music.163.com\") \nprint(browser.page_source)\n#browser.close() ",
"查找元素\n单个元素查找",
"from selenium import webdriver\n\nbrowser = webdriver.Chrome()\n\nbrowser.get(\"http://music.163.com\")\ninput_first = browser.find_element_by_id(\"g_search\")\ninput_second = browser.find_element_by_css_selector(\"#g_search\")\ninput_third = browser.find_element_by_xpath('//*[@id=\"g_search\"]')\nprint(input_first)\nprint(input_second)\nprint(input_third)",
"这里我们通过三种不同的方式去获取响应的元素,第一种是通过id的方式,第二个中是CSS选择器,第三种是xpath选择器,结果都是相同的。\n常用的查找元素方法:\n\nfind_element_by_name\nfind_element_by_id\nfind_element_by_xpath\nfind_element_by_link_text\nfind_element_by_partial_link_text\nfind_element_by_tag_name\nfind_element_by_class_name\nfind_element_by_css_selector",
"# 下面这种方式是比较通用的一种方式:这里需要记住By模块所以需要导入\nfrom selenium.webdriver.common.by import By\n\nbrowser = webdriver.Chrome()\nbrowser.get(\"http://music.163.com\")\ninput_first = browser.find_element(By.ID,\"g_search\")\nprint(input_first)\nbrowser.close()",
"多个元素查找\n其实多个元素和单个元素的区别,举个例子:find_elements,单个元素是find_element,其他使用上没什么区别,通过其中的一个例子演示:",
"browser = webdriver.Chrome()\nbrowser.get(\"http://music.163.com\")\nlis = browser.find_elements_by_css_selector('body')\nprint(lis)\nbrowser.close() ",
"当然上面的方式也是可以通过导入from selenium.webdriver.common.by import By 这种方式实现\n\nlis = browser.find_elements(By.CSS_SELECTOR,'.service-bd li')\n\n同样的在单个元素中查找的方法在多个元素查找中同样存在:\n- find_elements_by_name\n- find_elements_by_id\n- find_elements_by_xpath\n- find_elements_by_link_text\n- find_elements_by_partial_link_text\n- find_elements_by_tag_name\n- find_elements_by_class_name\n- find_elements_by_css_selector\n元素交互操作\n对于获取的元素调用交互方法",
"from selenium import webdriver\nimport time\nbrowser = webdriver.Chrome()\n\nbrowser.get(\"https://music.163.com/\")\ninput_str = browser.find_element_by_id('srch')\ninput_str.send_keys(\"周杰伦\")\ntime.sleep(3) #休眠,模仿人工搜索\ninput_str.clear()\ninput_str.send_keys(\"林俊杰\")",
"运行的结果可以看出程序会自动打开Chrome浏览器并打开淘宝输入ipad,然后删除,重新输入MacBook pro,并点击搜索\nSelenium所有的api文档:http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains\n执行JavaScript\n这是一个非常有用的方法,这里就可以直接调用js方法来实现一些操作,\n下面的例子是通过登录知乎然后通过js翻到页面底部,并弹框提示",
"from selenium import webdriver\nbrowser = webdriver.Chrome()\nbrowser.get(\"https://www.zhihu.com/explore/\")\nbrowser.execute_script('window.scrollTo(0, document.body.scrollHeight)')\nbrowser.execute_script('alert(\"To Bottom\")')",
"一个例子\n```pyton\nfrom selenium import webdriver\nbrowser = webdriver.Chrome()\nbrowser.get(\"https://www.privco.com/home/login\") #需要翻墙打开网址\nusername = 'fake_username'\npassword = 'fake_password'\nbrowser.find_element_by_id(\"username\").clear()\nbrowser.find_element_by_id(\"username\").send_keys(username) \nbrowser.find_element_by_id(\"password\").clear()\nbrowser.find_element_by_id(\"password\").send_keys(password)\nbrowser.find_element_by_css_selector(\"#login-form > div:nth-child(5) > div > button\").click()\n```",
"# url = \"https://www.privco.com/private-company/329463\"\ndef download_excel(url):\n browser.get(url)\n name = url.split('/')[-1]\n title = browser.title\n source = browser.page_source\n with open(name+'.html', 'w') as f:\n f.write(source)\n try:\n soup = BeautifulSoup(source, 'html.parser')\n url_new = soup.find('span', {'class', 'profile-name'}).a['href']\n url_excel = url_new + '/export'\n browser.get(url_excel)\n except Exception as e:\n print(url, 'no excel')\n pass\n \n \n\nurls = [ 'https://www.privco.com/private-company/1135789',\n 'https://www.privco.com/private-company/542756',\n 'https://www.privco.com/private-company/137908',\n 'https://www.privco.com/private-company/137138']\n\nfor k, url in enumerate(urls):\n print(k)\n try:\n download_excel(url)\n except Exception as e:\n print(url, e)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
pm4py/pm4py-core | notebooks/2_event_data_filtering.ipynb | gpl-3.0 | [
"Event Data Filtering\nby: Sebastiaan J. van Zelst\nLike any data-driven field, the successful application of process mining needs data munging and crunching.\nIn pm4py, you can munge and crunch your data in two ways, i.e., you can write lambda functions and apply them on\nyour event log, or, you can apply pre-built filtering and transformation functions.\nHence, in this turtorial, we briefly explain how to filter event data in various different ways in pm4py.\nGeneric Lambda Functions\nIn a nutshell, a lambda function allows you to specify a function that needs to be applied on a given element.\nAs a simple example, consider the following snippet:",
"f = lambda x: 2 * x\nf(5)",
"In the code, we assign a lambda function to variable f.\nThe function specifies that on each possible input it receives, the resulting function that is applied is a multiplication by 2.\nHence f(1)=2, f(2)=4, etc.\nNote that, invoking f only works if we provide an argument that can be combined with the * 2 operation.\nFor example, for strings, the * 2 operation concatenates the input argument with itself:",
"f('Pete')",
"Filter and Map\nLambda functions allow us to write short, type-independent functions.\nGiven a list of objects, Python provides two core functions that can apply a given lambda function on each element of\nthe given list (in fact, any iterable):\n\nfilter(f,l)\napply the given lambda function f as a filter on the iterable l.\nmap(f,l)\napply the given lambda function f as a transformation on the iterable l.\n\nFor more information, study the concept of ‘higher order functions’ in Python, e.g., as introduced here.\nLet's consider a few simple examples.",
"l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nfilter(lambda n: n >= 5, l)",
"The previous example needs little to no explanation, i.e., the filter retains all numbers in the list greater or equal to five.\nHowever, what is interesting, is the fact that the resulting objects are not a list (or an iterables), rather a filter object.\nSuch an objects can be easily transformed to a list by wrapping it with a list() cast:",
"list(filter(lambda n: n >= 5, l))",
"The same holds for the map() function:",
"map(lambda n: n * 3, l)\n\nlist(map(lambda n: n * 3, l))",
"Observe that, the previous map function simply muliplies each element of list l by three.\nLambda-Based Filtering in pm4py\nIn pm4py, event log objects mimic lists of traces, which in turn, mimic lists of events.\nClearly, lambda functions can therefore be applied to event logs and traces.\nHowever, as we have shown in the previous example, after applying such a lamda-based filter, the resulting object is no longer an event log.\nFurthermore, casting a filter object or map object to an event log in pm4py is a bit more involved, i.e., it is\nnot so trivial as list(filter(...)) in the previous example.\nThis is due to the fact that various meta-data is stored in the event log object as well.\nTo this end, pm4py offers wrapper functions that make sure that after applying your higher-order function with a lambda function,\nthe resulting object is again an Event Log object.\nIn the upcoming scripts, we'll take a look at some lambda-based fitlering.\nFirst, let's inspect the length of each trace in our running example log by applying a generic map function",
"import pm4py\n\nlog = pm4py.read_xes('data/running_example.xes')\n# inspect the length of each trace using a generic map function\nlist(map(lambda t: len(t), log))",
"As we can see, there are four traces describing a trace of length 5, one trace of length 9 and one trace of length 13.\nLet's retain all traces that have a lenght greater than 5.",
"lf = pm4py.filter_log(lambda t: len(t) > 5, log)\nlist(map(lambda t: len(t), lf))",
"The traces of length 9 and 13 have repeated behavior in them, i.e., the reinitiate request activity has been performed at least once:",
"list(map(lambda t: (len(t), len(list(filter(lambda e: e['concept:name'] == 'reinitiate request', t)))), log))",
"Observe that the map function maps each trace onto a tuple.\nThe first element describes the length of the trace.\nThe second element describes the number of occurrences of the activity register request.\nObserve that we obtain said counter by filtering the trace, i.e., by retaining only those events that describe the\nreinitiate request activity and counting the length of the resulting list.\nNote that the traces describe a list of events, and, events are implementing a dictionary.\nIn this case, the activity name is captured by the concept:name attribute.\nIn general, PM4PY supports the following generic filtering functions:\n\npm4py.filter_log(f, log)\nfilter the log according to a function f.\npm4py.filter_trace(f,trace)\nfilter the trace according to function f.\npm4py.sort_log(log, key, reverse)\nsort the event log according to a given key, reversed order if reverse==True.\npm4py.sort_trace(trace, key, reverse)\nsort the trace according to a given key, reversed order if reverse==True.\n\nLet's see these functions in action:",
"print(len(log))\nlf = pm4py.filter_log(lambda t: len(t) > 5, log)\nprint(len(lf))\n\nprint(len(log[0])) #log[0] fetches the 1st trace\ntf = pm4py.filter_trace(lambda e: e['concept:name'] in {'register request', 'pay compensation'}, log[0])\nprint(len(tf))\n\nprint(len(log[0]))\nls = pm4py.sort_log(log, lambda t: len(t))\nprint(len(ls[0]))\nls = pm4py.sort_log(log, lambda t: len(t), reverse=True)\nprint(len(ls[0]))",
"Specific Filters\nThere are various pre-built filters in PM4Py, which make commonly needed process mining filtering functionality a lot easier.\nIn the upcoming overview, we briefly give present these functions.\nWe describe how to call them, their main input parameters and their return objects.\nNote that, all of the filters work on both DataFrames and pm4py event log objects.\nStart Activities\n\nfilter_start_activities(log, activities, retain=True)\nretains (or drops) the traces that contain the given activity as the final event.",
"pm4py.filter_start_activities(log, {'register request'})\n\npm4py.filter_start_activities(log, {'register request TYPO!'})\n\nimport pandas\n\nldf = pm4py.format_dataframe(pandas.read_csv('data/running_example.csv', sep=';'), case_id='case_id',\n activity_key='activity', timestamp_key='timestamp')\npm4py.filter_start_activities(ldf, {'register request'})\n\npm4py.filter_start_activities(ldf, {'register request TYPO!'})",
"End Activities\n\nfilter_end_activities(log, activities, retain=True)\nretains (or drops) the traces that contain the given activity as the final event.\n\nFor example, we can retain the number of cases that end with a \"payment of the compensation\":",
"len(pm4py.filter_end_activities(log, 'pay compensation'))",
"Event Attribute Values\n\nfilter_event_attribute_values(log, attribute_key, values, level=\"case\", retain=True)\nretains (or drops) traces (or events) based on a given collection of values that need to be matched for the\n given attribute_key. If level=='case', complete traces are matched (or dropped if retain==False) that\n have at least one event that describes a specifeid value for the given attribute. If level=='event', only events\n that match are retained (or dropped).",
"# retain any case that has either Peter or Mike working on it\nlf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Pete', 'Mike'})\nlist(map(lambda t: list(map(lambda e: e['org:resource'], t)), lf))\n\n# retain only those events that have Pete or Mik working on it\nlf = pm4py.filter_event_attribute_values(log, 'org:resource', {'Pete', 'Mike'}, level='event')\nlist(map(lambda t: list(map(lambda e: e['org:resource'], t)), lf))\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jpallas/beakerx | doc/python/ChartingAPI.ipynb | apache-2.0 | [
"Python API to BeakerX Interactive Plotting\nYou can access Beaker's native interactive plotting library from Python.\nPlot with simple properties\nPython plots has syntax very similar to Groovy plots. Property names are the same.",
"from beakerx import *\nimport pandas as pd\n\ntableRows = pd.read_csv('../resources/data/interest-rates.csv')\n\nPlot(title=\"Title\",\n xLabel=\"Horizontal\",\n yLabel=\"Vertical\",\n initWidth=500,\n initHeight=200)",
"Plot items\nLines, Bars, Points and Right yAxis",
"x = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\n\npp = Plot(title='Bars, Lines, Points and 2nd yAxis', \n xLabel=\"xLabel\", \n yLabel=\"yLabel\", \n legendLayout=LegendLayout.HORIZONTAL,\n legendPosition=LegendPosition(position=LegendPosition.Position.RIGHT),\n omitCheckboxes=True)\n\npp.add(YAxis(label=\"Right yAxis\"))\npp.add(Bars(displayName=\"Bar\", \n x=[1,3,5,7,10], \n y=[100, 120,90,100,80], \n width=1))\npp.add(Line(displayName=\"Line\", \n x=x, \n y=y, \n width=6, \n yAxis=\"Right yAxis\"))\npp.add(Points(x=x, \n y=y, \n size=10, \n shape=ShapeType.DIAMOND,\n yAxis=\"Right yAxis\"))\n\nplot = Plot(title= \"Setting line properties\")\nys = [0, 1, 6, 5, 2, 8]\nys2 = [0, 2, 7, 6, 3, 8]\nplot.add(Line(y= ys, width= 10, color= Color.red))\nplot.add(Line(y= ys, width= 3, color= Color.yellow))\nplot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))\nplot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))\nplot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))\nplot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))\n\nplot = Plot(title= \"Changing Point Size, Color, Shape\")\ny1 = [6, 7, 12, 11, 8, 14]\ny2 = [4, 5, 10, 9, 6, 12]\ny3 = [2, 3, 8, 7, 4, 10]\ny4 = [0, 1, 6, 5, 2, 8]\nplot.add(Points(y= y1))\nplot.add(Points(y= y2, shape= ShapeType.CIRCLE))\nplot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))\nplot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))\n\nplot = Plot(title= \"Changing point properties with list\")\ncs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]\nss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]\nfs = [False, False, False, True, False, False]\nplot.add(Points(y= [5] * 6, size= 12.0, color= cs))\nplot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))\nplot.add(Points(y= [3] * 6, size= ss, color= Color.red))\nplot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))\n\nplot = Plot()\ny1 = [1.5, 1, 6, 5, 2, 8]\ncs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]\nss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]\nplot.add(Stems(y= y1, color= cs, style= ss, width= 5))\n\nplot = Plot(title= \"Setting the base of Stems\")\nys = [3, 5, 2, 3, 7]\ny2s = [2.5, -1.0, 3.5, 2.0, 3.0]\nplot.add(Stems(y= ys, width= 2, base= y2s))\nplot.add(Points(y= ys))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 5 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))",
"Lines, Points with Pandas",
"plot = Plot(title= \"Pandas line\")\nplot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))\nplot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))\n\nplot\n\nplot = Plot(title= \"Pandas Series\")\nplot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), width=2))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 7 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))",
"Areas, Stems and Crosshair",
"ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)\nplot = Plot(crosshair=ch)\ny1 = [4, 8, 16, 20, 32]\nbase = [2, 4, 8, 10, 16]\ncs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]\nss = [StrokeType.SOLID, \n StrokeType.SOLID, \n StrokeType.DASH, \n StrokeType.DOT, \n StrokeType.DASHDOT, \n StrokeType.LONGDASH]\nplot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))\nplot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))\n\nplot = Plot()\ny = [3, 5, 2, 3]\nx0 = [0, 1, 2, 3]\nx1 = [3, 4, 5, 8]\nplot.add(Area(x= x0, y= y))\nplot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))\n\np = Plot()\np.add(Line(y= [3, 6, 12, 24], displayName= \"Median\"))\np.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16],\n color= Color(255, 0, 0, 50), displayName= \"Q1 to Q3\"))\n\nch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)\npp = Plot(crosshair= ch, omitCheckboxes= True,\n legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition(position=LegendPosition.Position.TOP))\nx = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\npp.add(Line(displayName= \"Line\", x= x, y= y, width= 3))\npp.add(Bars(displayName= \"Bar\", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))\npp.add(Points(x= x, y= y, size= 10))",
"Constant Lines, Constant Bands",
"p = Plot ()\np.add(Line(y=[-1, 1]))\np.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))\np.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))\np.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))\n\nPlot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))\n\np = Plot() \np.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))\np.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))\np.add(ConstantBand(x= [1, 2]))\np.add(ConstantBand(x= [4, 'Infinity']))\n\nfrom decimal import Decimal\npos_inf = Decimal('Infinity')\nneg_inf = Decimal('-Infinity')\nprint (pos_inf)\nprint (neg_inf)\n\n\nfrom beakerx.plot import Text as BeakerxText\nplot = Plot()\nxs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]\ndef label(i):\n if ys[i] > ys[i+1] and ys[i] > ys[i-1]:\n return \"max\"\n if ys[i] < ys[i+1] and ys[i] < ys[i-1]:\n return \"min\"\n if ys[i] > ys[i-1]:\n return \"rising\"\n if ys[i] < ys[i-1]:\n return \"falling\"\n return \"\"\n\nfor i in xs:\n i = i - 1\n if i > 0 and i < len(xs)-1:\n plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))\n\nplot.add(Line(x= xs, y= ys))\nplot.add(Points(x= xs, y= ys))\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\n#plot.setYBound(1, 5)\nplot.getYAxes()[0].setBound(1,5)\nplot.getYAxes()[1].setBound(3,6)\n\n\nplot\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\nplot.setYBound(1, 5)\n\nplot",
"TimePlot",
"import time\n\nmillis = current_milli_time()\n\nhour = round(1000 * 60 * 60)\nxs = []\nys = []\nfor i in range(11):\n xs.append(millis + hour * i)\n ys.append(i)\n\nplot = TimePlot(timeZone=\"America/New_York\")\n# list of milliseconds\nplot.add(Points(x=xs, y=ys, size=10, displayName=\"milliseconds\"))\n\nplot = TimePlot()\nplot.add(Line(x=tableRows['time'], y=tableRows['m3']))",
"numpy datatime64",
"y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [np.datetime64('2015-02-01'), \n np.datetime64('2015-02-02'), \n np.datetime64('2015-02-03'),\n np.datetime64('2015-02-04'),\n np.datetime64('2015-02-05'),\n np.datetime64('2015-02-06')]\nplot = TimePlot()\n\nplot.add(Line(x=dates, y=y))",
"Timestamp",
"y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = pd.Series(['2015-02-01',\n '2015-02-02',\n '2015-02-03',\n '2015-02-04',\n '2015-02-05',\n '2015-02-06']\n , dtype='datetime64[ns]')\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n",
"Datetime and date",
"import datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.date(2015, 2, 1),\n datetime.date(2015, 2, 2),\n datetime.date(2015, 2, 3),\n datetime.date(2015, 2, 4),\n datetime.date(2015, 2, 5),\n datetime.date(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n\n\nimport datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.datetime(2015, 2, 1),\n datetime.datetime(2015, 2, 2),\n datetime.datetime(2015, 2, 3),\n datetime.datetime(2015, 2, 4),\n datetime.datetime(2015, 2, 5),\n datetime.datetime(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))",
"NanoPlot",
"millis = current_milli_time()\nnanos = millis * 1000 * 1000\nxs = []\nys = []\nfor i in range(11):\n xs.append(nanos + 7 * i)\n ys.append(i)\n\nnanoplot = NanoPlot()\nnanoplot.add(Points(x=xs, y=ys))",
"Stacking",
"y1 = [1,5,3,2,3]\ny2 = [7,2,4,1,3]\np = Plot(title='Plot with XYStacker', initHeight=200)\na1 = Area(y=y1, displayName='y1')\na2 = Area(y=y2, displayName='y2')\nstacker = XYStacker()\np.add(stacker.stack([a1, a2]))",
"SimpleTime Plot",
"SimpleTimePlot(tableRows, [\"y1\", \"y10\"], # column names\n timeColumn=\"time\", # time is default value for a timeColumn\n yLabel=\"Price\", \n displayNames=[\"1 Year\", \"10 Year\"],\n colors = [[216, 154, 54], Color.lightGray],\n displayLines=True, # no lines (true by default)\n displayPoints=False) # show points (false by default))\n\n#time column base on DataFrame index \ntableRows.index = tableRows['time']\n\nSimpleTimePlot(tableRows, ['m3'])\n\nrng = pd.date_range('1/1/2011', periods=72, freq='H')\nts = pd.Series(np.random.randn(len(rng)), index=rng)\ndf = pd.DataFrame(ts, columns=['y'])\nSimpleTimePlot(df, ['y'])\n",
"Second Y Axis\nThe plot can have two y-axes. Just add a YAxis to the plot object, and specify its label.\nThen for data that should be scaled according to this second axis,\nspecify the property yAxis with a value that coincides with the label given.\nYou can use upperMargin and lowerMargin to restrict the range of the data leaving more white, perhaps for the data on the other axis.",
"p = TimePlot(xLabel= \"Time\", yLabel= \"Interest Rates\")\np.add(YAxis(label= \"Spread\", upperMargin= 4))\np.add(Area(x= tableRows.time, y= tableRows.spread, displayName= \"Spread\",\n yAxis= \"Spread\", color= Color(180, 50, 50, 128)))\np.add(Line(x= tableRows.time, y= tableRows.m3, displayName= \"3 Month\"))\np.add(Line(x= tableRows.time, y= tableRows.y10, displayName= \"10 Year\"))",
"Combined Plot",
"import math\npoints = 100\nlogBase = 10\nexpys = []\nxs = []\nfor i in range(0, points):\n xs.append(i / 15.0)\n expys.append(math.exp(xs[i]))\n\n\ncplot = CombinedPlot(xLabel= \"Linear\")\nlogYPlot = Plot(title= \"Linear x, Log y\", yLabel= \"Log\", logY= True, yLogBase= logBase)\nlogYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlogYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(logYPlot, 4)\n\nlinearYPlot = Plot(title= \"Linear x, Linear y\", yLabel= \"Linear\")\nlinearYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlinearYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(linearYPlot,4)\n\ncplot\n\n\nplot = Plot(title= \"Log x, Log y\", xLabel= \"Log\", yLabel= \"Log\",\n logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)\n\nplot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nplot.add(Line(x= xs, y= xs, displayName= \"f(x) = x\"))\n\nplot"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LimeeZ/phys292-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | [
"Optimization Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:",
"def hat(x,a,b):\n v = -a*x**2 + b*x**4\n return v\n \n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0",
"Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:",
"a = 5.0\nb = 1.0\n\nx1 = np.arange(-3,3,0.1)\nplt.plot(x1, hat(x1, 5,1))\n\nassert True # leave this to grade the plot",
"Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.",
"def hat(x):\n b = 1\n a = 5\n v = -a*x**2 + b*x**4\n return v\n\nxmin1 = opt.minimize(hat,-1.5)['x'][0]\nxmin2 = opt.minimize(hat,1.5)['x'][0]\nxmins = np.array([xmin1,xmin2])\n\nprint(xmin1)\nprint(xmin2)\n\nx1 = np.arange(-3,3,0.1)\nplt.plot(x1, hat(x1))\nplt.scatter(xmins,hat(xmins), c = 'r',marker = 'o')\nplt.grid(True)\nplt.title('Hat Potential')\nplt.xlabel('Range')\nplt.ylabel('Potential')\n\nassert True # leave this for grading the plot",
"To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\n$$\nV(x) = -a x^2 + b x^4 \\\nV'(x) = -2ax + 4bx^3 \\\nV'(x) = x (-2a + 4bx^2)\\\n4x^2 - 10 = 2(2x^2 - 5) \\ \n2(2x^2 - 5) = - \\sqrt{10}-2x , \\sqrt{10}+2 x \\\n$$\nThe minimums or maximums are at $$ x = \\frac{\\sqrt{10}}{2}, x = \\frac{-\\sqrt{10}}{2}, x = 0\\$$\nChecking to see if they are a minimum or a maximum:\\\n$$\nV''(x) = -10 + 12x^2\\\nV''(0) = -10 + 12(0)^2 = -10\\\n$$\nx = 0 is a maxima.\n$$\nV''(\\frac{\\sqrt{10}}{2}) = -10 + 12(\\frac{\\sqrt{10}}{2})^2 = 350 \\\n$$\nThe x above is a minima.\n$$\nV''(\\frac{-\\sqrt{10}}{2}) = -10 + 12(\\frac{-\\sqrt{10}}{2})^2 = 350\\\n$$\nThe x above is a minima."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ecervera/mindstorms-nb | task/quadrat.ipynb | mit | [
"Exercici: fer un quadrat\n<img src=\"img/bart-simpson-chalkboard.jpg\" align=\"right\" width=250>\nA partir de les instruccions dels moviments bàsics, heu de fer un programa per a que el robot avance i gire 90 graus, de manera de faça una trajectòria quadrada.\nL'estratègia és simple: repetiu quatre vegades el codi necessari per a fer avançar el robot un temps, i girar (a l'esquerra o a la dreta).\nAbans que res, no oblideu connectar-vos al robot!",
"from functions import connect, forward, stop, left, right, disconnect, next_notebook\nfrom time import sleep\n\nconnect() # Executeu, polsant Majúscules + Enter",
"Programa principal\nSubstituïu els comentaris per les ordres necessàries:",
"# avançar\n# girar\n# avançar\n# girar\n# avançar\n# girar\n# avançar\n# girar\n# parar",
"Ha funcionat a la primera? Fer un quadrat perfecte no és fàcil, i el més normal és que calga ajustar un parell de coses:\n\n\nel gir de 90 graus: si el robot gira massa, heu de disminuir el temps del sleep; si gira massa poc, augmentar-lo (podeu posar decimals)\n\n\nsi no va recte: és normal que un dels motors gire una mica més ràpid que l'altre; podeu ajustar les velocitats de cada motor individualment entre 0 (mínim) i 100 (màxim), per exemple:\nforward(speed_B=90,speed_C=75)\n\n\nCanvieu els valors i torneu a provar fins aconseguir un quadrat decent (la perfecció és impossible).\n\nVersió pro\nEls llenguatges de programació tenen estructures per a repetir blocs d'instruccions sense haver d'escriure-les tantes vegades. És el que s'anomena bucle o, en anglès, for loop.\nEn Python, un bucle per a repetir un bloc d'instruccions quatre vegades s'escriu així:",
"for i in range(4):\n # avançar\n # girar\n# parar",
"És important que les instruccions de dins del bucle estiguen desplaçades cap a la dreta, és a dir indentades.\nSubstituïu els comentaris per les instruccions i proveu.\n\nRecapitulem\nPer a acabar l'exercici, i abans de passar a la següent pàgina, desconnecteu el robot:",
"disconnect()\nnext_notebook('sensors')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JamesSample/icpw | correct_toc_elev.ipynb | mit | [
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport imp\nfrom sqlalchemy import create_engine",
"TOC and elevation corrections\nSome further changes to the ICPW trends analysis are required:\n\n\nHeleen has discovered some strange results for TOC for some of the Canadian sites (see e-mail received 14/03/2017 at 17.45) <br><br>\n\n\nWe now have elevation data for the remaining sites (see e-mail received 15/03/2017 at 08.37) <br><br>\n\n\nHeleen would like a \"grid cell ID\" adding to the climate processing output (see e-mail received 15/03/2017 13.33)\n\n\nHaving made the above changes, the whole climate data and trends analysis needs re-running. This notebook deals with points 1 and 2 above; point 3 requires a small modification to the existing climate code.\n1. Correct TOC\nThis is a bit more complicated than it first appears. It looks as though a lot of dupicate data was uploaded to the database at some point, and some of the duplicates have incorrect method names. For the Ontairo lakes, the same values have been uploaded both as DOC (in mg-C/l) and as \"DOCx\", which is in umol-C/l. The conversion factor from DOCx to DOC is therefore 0.012, which is very close to Heleen's estimated correction factor of dividing by 100. The problem is that the database appears to be selecting which values to display more-or-less at random. This is illustrated below.",
"# Create db connection\nr2_func_path = r'C:\\Data\\James_Work\\Staff\\Heleen_d_W\\ICP_Waters\\Upload_Template\\useful_resa2_code.py'\nresa2 = imp.load_source('useful_resa2_code', r2_func_path)\n\nengine, conn = resa2.connect_to_resa2()\n\n# Get example data\nsql = (\"SELECT * FROM resa2.water_chemistry_values2 \"\n \"WHERE sample_id = (SELECT water_sample_id \"\n \"FROM resa2.water_samples \"\n \"WHERE station_id = 23466 \"\n \"AND sample_date = DATE '2000-05-23') \"\n \"AND method_id IN (10313, 10294)\")\n\ndf = pd.read_sql_query(sql, engine)\n\ndf",
"method_id=10294 is DOC in mg-C/l, whereas method_id=10313 is DOCx in umol-C/l. Both were uploaded within the space of a few weeks back in 2006. I assume that the values with method_id=10313 are correct, and those with method_id=10294 are wrong. \nIt seems as though, when both methods are present, RESA2 preferentially chooses method_id=10313, which is why most of the data look OK. However, if method_id=10313 is not available, the database uses the values for method_id=10294 instead, and these values are wrong. The problem is that this selection isn't deliberate: the database only prefers method_id=10313 because it appears lower in the table than method_id=10294. Essentially, it's just a fluke that most of the data turn out OK - it could easily have been the other way around.\nTo fix this, I need to:\n\n\nGo through all the samples from the Ontario sites and see whether there are values for both method_id=10313 and method_id=10294 <br><br>\n\n\nIf yes, see whether the raw values are the same. If so, delete the value for method_id=10294 <br><br>\n\n\nIf values are only entered with method_id=10294, check to see whether they are too large and, if so, switch the method_id to 10313\n\n\nThis is done below.",
"# Get a list of all water samples associated with\n# stations in the 'ICPW_TOCTRENDS_2015_CA_ICPW' project\nsql = (\"SELECT water_sample_id FROM resa2.water_samples \"\n \"WHERE station_id IN ( \"\n \"SELECT station_id FROM resa2.stations \"\n \"WHERE station_id IN ( \"\n \"SELECT station_id FROM resa2.projects_stations \"\n \"WHERE project_id IN ( \"\n \"SELECT project_id FROM resa2.projects \"\n \"WHERE project_name = 'ICPW_TOCTRENDS_2015_CA_ICPW')))\")\n \nsamp_df = pd.read_sql_query(sql, engine)\n\n# Loop over samples and check whether both method_ids are present\nfor samp_id in samp_df['water_sample_id'].values:\n # Get data for this sample\n sql = (\"SELECT method_id, value \"\n \"FROM resa2.water_chemistry_values2 \"\n \"WHERE sample_id = %s \"\n \"AND method_id IN (10294, 10313)\" % samp_id)\n df = pd.read_sql_query(sql, engine)\n df.index = df['method_id']\n del df['method_id']\n \n # How many entries for DOC?\n if len(df) == 1:\n # We have just one of the two methods\n if df.index[0] == 10294:\n # Should be DOC in mg-C/l and values should be <50\n if df['value'].values[0] > 50:\n # Method_ID must be wrong\n sql = ('UPDATE resa2.water_chemistry_values2 '\n 'SET method_id = 10313 '\n 'WHERE sample_id = %s '\n 'AND method_id = 10294' % samp_id)\n result = conn.execute(sql)\n \n # Otherwise we have both methods\n elif len(df) == 2:\n # Are they the same and large?\n if (df.loc[10313].value == df.loc[10294].value) and (df.loc[10313].value > 50):\n # Delete record for method_id=10294\n sql = ('DELETE FROM resa2.water_chemistry_values2 '\n 'WHERE sample_id = %s '\n 'AND method_id = 10294' % samp_id)\n result = conn.execute(sql)\n\nprint 'Finished.'",
"2. Update station elevations\nHeleen has provided the missing elevation data, which I copied here:\nC:\\Data\\James_Work\\Staff\\Heleen_d_W\\ICP_Waters\\TOC_Trends_Analysis_2015\\CRU_Climate_Data\\missing_elev_data.xlsx",
"# Read elev data\nin_xlsx = (r'C:\\Data\\James_Work\\Staff\\Heleen_d_W\\ICP_Waters\\TOC_Trends_Analysis_2015'\n r'\\CRU_Climate_Data\\missing_elev_data.xlsx')\nelev_df = pd.read_excel(in_xlsx)\nelev_df.index = elev_df['station_id']\n\n# Loop over stations and update info\nfor stn_id in elev_df['station_id'].values:\n # Get elev\n elev = elev_df.loc[stn_id]['altitude']\n\n # Update rows\n sql = ('UPDATE resa2.stations '\n 'SET altitude = %s '\n 'WHERE station_id = %s' % (elev, stn_id))\n result = conn.execute(sql)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst | courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb | apache-2.0 | [
"Neural Network\nLearning Objectives:\n * Use the DNNRegressor class in TensorFlow to predict median housing price\nThe data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively.\n<p>\nLet's use a set of features to predict house value.\n\n## Set Up\nIn this first cell, we'll load the necessary libraries.",
"import math\nimport shutil\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\n\ntf.logging.set_verbosity(tf.logging.INFO)\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.1f}'.format",
"Next, we'll load our data set.",
"df = pd.read_csv(\"https://storage.googleapis.com/ml_universities/california_housing_train.csv\", sep=\",\")",
"Examine the data\nIt's a good idea to get to know your data a little bit before you work with it.\nWe'll print out a quick summary of a few useful statistics on each column.\nThis will include things like mean, standard deviation, max, min, and various quantiles.",
"df.head()\n\ndf.describe()",
"This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create a different, more appropriate feature. Because we are predicing the price of a single house, we should try to make all our features correspond to a single house as well",
"df['num_rooms'] = df['total_rooms'] / df['households']\ndf['num_bedrooms'] = df['total_bedrooms'] / df['households']\ndf['persons_per_house'] = df['population'] / df['households']\ndf.describe()\n\ndf.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)\ndf.describe()",
"Build a neural network model\nIn this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.\nTo train our model, we'll first use the LinearRegressor interface. Then, we'll change to DNNRegressor",
"featcols = {\n colname : tf.feature_column.numeric_column(colname) \\\n for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')\n}\n# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons\nfeatcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'),\n np.linspace(-124.3, -114.3, 5).tolist())\nfeatcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'),\n np.linspace(32.5, 42, 10).tolist())\n\nfeatcols.keys()\n\n# Split into train and eval\nmsk = np.random.rand(len(df)) < 0.8\ntraindf = df[msk]\nevaldf = df[~msk]\n\nSCALE = 100000\nBATCH_SIZE= 100\nOUTDIR = './housing_trained'\ntrain_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(featcols.keys())],\n y = traindf[\"median_house_value\"] / SCALE,\n num_epochs = None,\n batch_size = BATCH_SIZE,\n shuffle = True)\neval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(featcols.keys())],\n y = evaldf[\"median_house_value\"] / SCALE, # note the scaling\n num_epochs = 1, \n batch_size = len(evaldf), \n shuffle=False)\n\n# Linear Regressor\ndef train_and_evaluate(output_dir, num_train_steps):\n myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate\n estimator = tf.estimator.LinearRegressor(\n model_dir = output_dir, \n feature_columns = featcols.values(),\n optimizer = myopt)\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.contrib.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = eval_input_fn,\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntrain_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) \n\n# DNN Regressor\ndef train_and_evaluate(output_dir, num_train_steps):\n myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate\n estimator = # TODO: Implement DNN Regressor model\n \n #Add rmse evaluation metric\n def rmse(labels, predictions):\n pred_values = tf.cast(predictions['predictions'],tf.float64)\n return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}\n estimator = tf.contrib.estimator.add_metrics(estimator,rmse)\n \n train_spec=tf.estimator.TrainSpec(\n input_fn = train_input_fn,\n max_steps = num_train_steps)\n eval_spec=tf.estimator.EvalSpec(\n input_fn = eval_input_fn,\n steps = None,\n start_delay_secs = 1, # start evaluating after N seconds\n throttle_secs = 10, # evaluate every N seconds\n )\n tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)\n\n# Run training \nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\ntf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file\ntrain_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) "
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nproctor/phys202-2015-work | assignments/assignment06/DisplayEx01.ipynb | mit | [
"Display Exercise 1\nImports\nPut any needed imports needed to display rich output the following cell:",
"from IPython.display import Image\nfrom IPython.display import HTML\n\nassert True # leave this to grade the import statements",
"Basic rich display\nFind a Physics related image on the internet and display it in this notebook using the Image object.\n\nLoad it using the url argument to Image (don't upload the image to this server).\nMake sure the set the embed flag so the image is embedded in the notebook data.\nSet the width and height to 600px.",
"Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/6/6d/Particle2D.svg/320px-Particle2D.svg.png', embed=True, width = 600, height = 600)\n\n\nassert True # leave this to grade the image display",
"Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.",
"%%html\n<table>\n<tr>\n<th>Name</th>\n<th>Symbol</th>\n<th>Antiparticle</th>\n<th>Charge</th>\n<th>Mass</th>\n</tr>\n<tr>\n<th>Up</th>\n<td>u</td>\n<td>$\\bar{u}$</td>\n<td>+2/3</td>\n<td>1.5-3.3</td>\n</tr>\n<tr>\n<th>Down</th>\n<td>d</td>\n<td>$\\bar{d}$</td>\n<td>-1/3</td>\n<td>3.5-6.0</td>\n</tr>\n<tr>\n<th>Charm</th>\n<td>c</td>\n<td>$\\bar{c}$</td>\n<td>+2/3</td>\n<td>1,160-1,340</td>\n</tr>\n<tr>\n<th>Strange</th>\n<td>s</td>\n<td>$\\bar{s}$</td>\n<td>-1/3</td>\n<td>70-130</td>\n</tr>\n<tr>\n<th>Top</th>\n<td>t</td>\n<td>$\\bar{t}$</td>\n<td>+2/3</td>\n<td>169,100-173,300</td>\n</tr>\n<tr>\n<th>Bottom</th>\n<td>b</td>\n<td>$\\bar{b}$</td>\n<td>-1/3</td>\n<td>4,130-4,370</td>\n</tr>\n\n</table>\n\nassert True # leave this here to grade the quark table"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wy1iu/sphereface | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | mit | [
"Fine-tuning a Pretrained Network for Style Recognition\nIn this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.\nThe advantage of this approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the \"semantics\" of the general visual appearance. Think of it as a very powerful generic visual feature that you can treat as a black box. On top of that, only a relatively small amount of data is needed for good performance on the target task.\nFirst, we will need to prepare the data. This involves the following parts:\n(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.\n(2) Download a subset of the overall Flickr style dataset for this demo.\n(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.",
"caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)\n\nimport sys\nsys.path.insert(0, caffe_root + 'python')\nimport caffe\n\ncaffe.set_device(0)\ncaffe.set_mode_gpu()\n\nimport numpy as np\nfrom pylab import *\n%matplotlib inline\nimport tempfile\n\n# Helper function for deprocessing preprocessed images, e.g., for display.\ndef deprocess_net_image(image):\n image = image.copy() # don't modify destructively\n image = image[::-1] # BGR -> RGB\n image = image.transpose(1, 2, 0) # CHW -> HWC\n image += [123, 117, 104] # (approximately) undo mean subtraction\n\n # clamp values in [0, 255]\n image[image < 0], image[image > 255] = 0, 255\n\n # round and cast from float32 to uint8\n image = np.round(image)\n image = np.require(image, dtype=np.uint8)\n\n return image",
"1. Setup and dataset download\nDownload data required for this exercise.\n\nget_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.\ndownload_model_binary.py to download the pretrained reference model\nfinetune_flickr_style/assemble_data.py downloads the style training and testing data\n\nWe'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set full_dataset = True in the cell below.)",
"# Download just a small subset of the data for this exercise.\n# (2000 of 80K images, 5 of 20 labels.)\n# To download the entire dataset, set `full_dataset = True`.\nfull_dataset = False\nif full_dataset:\n NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1\nelse:\n NUM_STYLE_IMAGES = 2000\n NUM_STYLE_LABELS = 5\n\n# This downloads the ilsvrc auxiliary data (mean file, etc),\n# and a subset of 2000 images for the style recognition task.\nimport os\nos.chdir(caffe_root) # run scripts from caffe root\n!data/ilsvrc12/get_ilsvrc_aux.sh\n!scripts/download_model_binary.py models/bvlc_reference_caffenet\n!python examples/finetune_flickr_style/assemble_data.py \\\n --workers=-1 --seed=1701 \\\n --images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS\n# back to examples\nos.chdir('examples')",
"Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.",
"import os\nweights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\nassert os.path.exists(weights)",
"Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.",
"# Load ImageNet labels to imagenet_labels\nimagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'\nimagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\\t'))\nassert len(imagenet_labels) == 1000\nprint 'Loaded ImageNet labels:\\n', '\\n'.join(imagenet_labels[:10] + ['...'])\n\n# Load style labels to style_labels\nstyle_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'\nstyle_labels = list(np.loadtxt(style_label_file, str, delimiter='\\n'))\nif NUM_STYLE_LABELS > 0:\n style_labels = style_labels[:NUM_STYLE_LABELS]\nprint '\\nLoaded style labels:\\n', ', '.join(style_labels)",
"2. Defining and running the nets\nWe'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes.",
"from caffe import layers as L\nfrom caffe import params as P\n\nweight_param = dict(lr_mult=1, decay_mult=1)\nbias_param = dict(lr_mult=2, decay_mult=0)\nlearned_param = [weight_param, bias_param]\n\nfrozen_param = [dict(lr_mult=0)] * 2\n\ndef conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,\n param=learned_param,\n weight_filler=dict(type='gaussian', std=0.01),\n bias_filler=dict(type='constant', value=0.1)):\n conv = L.Convolution(bottom, kernel_size=ks, stride=stride,\n num_output=nout, pad=pad, group=group,\n param=param, weight_filler=weight_filler,\n bias_filler=bias_filler)\n return conv, L.ReLU(conv, in_place=True)\n\ndef fc_relu(bottom, nout, param=learned_param,\n weight_filler=dict(type='gaussian', std=0.005),\n bias_filler=dict(type='constant', value=0.1)):\n fc = L.InnerProduct(bottom, num_output=nout, param=param,\n weight_filler=weight_filler,\n bias_filler=bias_filler)\n return fc, L.ReLU(fc, in_place=True)\n\ndef max_pool(bottom, ks, stride=1):\n return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)\n\ndef caffenet(data, label=None, train=True, num_classes=1000,\n classifier_name='fc8', learn_all=False):\n \"\"\"Returns a NetSpec specifying CaffeNet, following the original proto text\n specification (./models/bvlc_reference_caffenet/train_val.prototxt).\"\"\"\n n = caffe.NetSpec()\n n.data = data\n param = learned_param if learn_all else frozen_param\n n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)\n n.pool1 = max_pool(n.relu1, 3, stride=2)\n n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)\n n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)\n n.pool2 = max_pool(n.relu2, 3, stride=2)\n n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)\n n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)\n n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)\n n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)\n n.pool5 = max_pool(n.relu5, 3, stride=2)\n n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)\n if train:\n n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)\n else:\n fc7input = n.relu6\n n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)\n if train:\n n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)\n else:\n fc8input = n.relu7\n # always learn fc8 (param=learned_param)\n fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)\n # give fc8 the name specified by argument `classifier_name`\n n.__setattr__(classifier_name, fc8)\n if not train:\n n.probs = L.Softmax(fc8)\n if label is not None:\n n.label = label\n n.loss = L.SoftmaxWithLoss(fc8, n.label)\n n.acc = L.Accuracy(fc8, n.label)\n # write the net to a temporary file and return its filename\n with tempfile.NamedTemporaryFile(delete=False) as f:\n f.write(str(n.to_proto()))\n return f.name",
"Now, let's create a CaffeNet that takes unlabeled \"dummy data\" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.",
"dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))\nimagenet_net_filename = caffenet(data=dummy_data, train=False)\nimagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)",
"Define a function style_net which calls caffenet on data from the Flickr style dataset.\nThe new network will also have the CaffeNet architecture, with differences in the input and output:\n\nthe input is the Flickr style data we downloaded, provided by an ImageData layer\nthe output is a distribution over 20 classes rather than the original 1000 ImageNet classes\nthe classification layer is renamed from fc8 to fc8_flickr to tell Caffe not to load the original classifier (fc8) weights from the ImageNet-pretrained model",
"def style_net(train=True, learn_all=False, subset=None):\n if subset is None:\n subset = 'train' if train else 'test'\n source = caffe_root + 'data/flickr_style/%s.txt' % subset\n transform_param = dict(mirror=train, crop_size=227,\n mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')\n style_data, style_label = L.ImageData(\n transform_param=transform_param, source=source,\n batch_size=50, new_height=256, new_width=256, ntop=2)\n return caffenet(data=style_data, label=style_label, train=train,\n num_classes=NUM_STYLE_LABELS,\n classifier_name='fc8_flickr',\n learn_all=learn_all)",
"Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.\nCall forward on untrained_style_net to get a batch of style training data.",
"untrained_style_net = caffe.Net(style_net(train=False, subset='train'),\n weights, caffe.TEST)\nuntrained_style_net.forward()\nstyle_data_batch = untrained_style_net.blobs['data'].data.copy()\nstyle_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)",
"Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network to view its top 5 predicted classes from the 1000 ImageNet classes.\nBelow we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and \"sandbar\" and \"seashore\" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.)",
"def disp_preds(net, image, labels, k=5, name='ImageNet'):\n input_blob = net.blobs['data']\n net.blobs['data'].data[0, ...] = image\n probs = net.forward(start='conv1')['probs'][0]\n top_k = (-probs).argsort()[:k]\n print 'top %d predicted %s labels =' % (k, name)\n print '\\n'.join('\\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])\n for i, p in enumerate(top_k))\n\ndef disp_imagenet_preds(net, image):\n disp_preds(net, image, imagenet_labels, name='ImageNet')\n\ndef disp_style_preds(net, image):\n disp_preds(net, image, style_labels, name='style')\n\nbatch_index = 8\nimage = style_data_batch[batch_index]\nplt.imshow(deprocess_net_image(image))\nprint 'actual label =', style_labels[style_label_batch[batch_index]]\n\ndisp_imagenet_preds(imagenet_net, image)",
"We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.\nIn fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.",
"disp_style_preds(untrained_style_net, image)",
"We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.",
"diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]\nerror = (diff ** 2).sum()\nassert error < 1e-8",
"Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)",
"del untrained_style_net",
"3. Training the style classifier\nNow, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and \"snapshotting\" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!",
"from caffe.proto import caffe_pb2\n\ndef solver(train_net_path, test_net_path=None, base_lr=0.001):\n s = caffe_pb2.SolverParameter()\n\n # Specify locations of the train and (maybe) test networks.\n s.train_net = train_net_path\n if test_net_path is not None:\n s.test_net.append(test_net_path)\n s.test_interval = 1000 # Test after every 1000 training iterations.\n s.test_iter.append(100) # Test on 100 batches each time we test.\n\n # The number of iterations over which to average the gradient.\n # Effectively boosts the training batch size by the given factor, without\n # affecting memory utilization.\n s.iter_size = 1\n \n s.max_iter = 100000 # # of times to update the net (training iterations)\n \n # Solve using the stochastic gradient descent (SGD) algorithm.\n # Other choices include 'Adam' and 'RMSProp'.\n s.type = 'SGD'\n\n # Set the initial learning rate for SGD.\n s.base_lr = base_lr\n\n # Set `lr_policy` to define how the learning rate changes during training.\n # Here, we 'step' the learning rate by multiplying it by a factor `gamma`\n # every `stepsize` iterations.\n s.lr_policy = 'step'\n s.gamma = 0.1\n s.stepsize = 20000\n\n # Set other SGD hyperparameters. Setting a non-zero `momentum` takes a\n # weighted average of the current gradient and previous gradients to make\n # learning more stable. L2 weight decay regularizes learning, to help prevent\n # the model from overfitting.\n s.momentum = 0.9\n s.weight_decay = 5e-4\n\n # Display the current training loss and accuracy every 1000 iterations.\n s.display = 1000\n\n # Snapshots are files used to store networks we've trained. Here, we'll\n # snapshot every 10K iterations -- ten times during training.\n s.snapshot = 10000\n s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'\n \n # Train on the GPU. Using the CPU to train large networks is very slow.\n s.solver_mode = caffe_pb2.SolverParameter.GPU\n \n # Write the solver to a temporary file and return its filename.\n with tempfile.NamedTemporaryFile(delete=False) as f:\n f.write(str(s))\n return f.name",
"Now we'll invoke the solver to train the style net's classification layer.\nFor the record, if you want to train the network using only the command line tool, this is the command:\n<code>\nbuild/tools/caffe train \\\n -solver models/finetune_flickr_style/solver.prototxt \\\n -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \\\n -gpu 0\n</code>\nHowever, we will train using Python in this example.\nWe'll first define run_solvers, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file.",
"def run_solvers(niter, solvers, disp_interval=10):\n \"\"\"Run solvers for niter iterations,\n returning the loss and accuracy recorded each iteration.\n `solvers` is a list of (name, solver) tuples.\"\"\"\n blobs = ('loss', 'acc')\n loss, acc = ({name: np.zeros(niter) for name, _ in solvers}\n for _ in blobs)\n for it in range(niter):\n for name, s in solvers:\n s.step(1) # run a single SGD step in Caffe\n loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()\n for b in blobs)\n if it % disp_interval == 0 or it + 1 == niter:\n loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %\n (n, loss[n][it], np.round(100*acc[n][it]))\n for n, _ in solvers)\n print '%3d) %s' % (it, loss_disp) \n # Save the learned weights from both nets.\n weight_dir = tempfile.mkdtemp()\n weights = {}\n for name, s in solvers:\n filename = 'weights.%s.caffemodel' % name\n weights[name] = os.path.join(weight_dir, filename)\n s.net.save(weights[name])\n return loss, acc, weights",
"Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.\nDuring training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.",
"niter = 200 # number of iterations to train\n\n# Reset style_solver as before.\nstyle_solver_filename = solver(style_net(train=True))\nstyle_solver = caffe.get_solver(style_solver_filename)\nstyle_solver.net.copy_from(weights)\n\n# For reference, we also create a solver that isn't initialized from\n# the pretrained ImageNet weights.\nscratch_style_solver_filename = solver(style_net(train=True))\nscratch_style_solver = caffe.get_solver(scratch_style_solver_filename)\n\nprint 'Running solvers for %d iterations...' % niter\nsolvers = [('pretrained', style_solver),\n ('scratch', scratch_style_solver)]\nloss, acc, weights = run_solvers(niter, solvers)\nprint 'Done.'\n\ntrain_loss, scratch_train_loss = loss['pretrained'], loss['scratch']\ntrain_acc, scratch_train_acc = acc['pretrained'], acc['scratch']\nstyle_weights, scratch_style_weights = weights['pretrained'], weights['scratch']\n\n# Delete solvers to save memory.\ndel style_solver, scratch_style_solver, solvers",
"Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.",
"plot(np.vstack([train_loss, scratch_train_loss]).T)\nxlabel('Iteration #')\nylabel('Loss')\n\nplot(np.vstack([train_acc, scratch_train_acc]).T)\nxlabel('Iteration #')\nylabel('Accuracy')",
"Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.",
"def eval_style_net(weights, test_iters=10):\n test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)\n accuracy = 0\n for it in xrange(test_iters):\n accuracy += test_net.forward()['acc']\n accuracy /= test_iters\n return test_net, accuracy\n\ntest_net, accuracy = eval_style_net(style_weights)\nprint 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )\nscratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)\nprint 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, )",
"4. End-to-end finetuning for style\nFinally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights \"end-to-end\" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.\nNote that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!",
"end_to_end_net = style_net(train=True, learn_all=True)\n\n# Set base_lr to 1e-3, the same as last time when learning only the classifier.\n# You may want to play around with different values of this or other\n# optimization parameters when fine-tuning. For example, if learning diverges\n# (e.g., the loss gets very large or goes to infinity/NaN), you should try\n# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value\n# for which learning does not diverge).\nbase_lr = 0.001\n\nstyle_solver_filename = solver(end_to_end_net, base_lr=base_lr)\nstyle_solver = caffe.get_solver(style_solver_filename)\nstyle_solver.net.copy_from(style_weights)\n\nscratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)\nscratch_style_solver = caffe.get_solver(scratch_style_solver_filename)\nscratch_style_solver.net.copy_from(scratch_style_weights)\n\nprint 'Running solvers for %d iterations...' % niter\nsolvers = [('pretrained, end-to-end', style_solver),\n ('scratch, end-to-end', scratch_style_solver)]\n_, _, finetuned_weights = run_solvers(niter, solvers)\nprint 'Done.'\n\nstyle_weights_ft = finetuned_weights['pretrained, end-to-end']\nscratch_style_weights_ft = finetuned_weights['scratch, end-to-end']\n\n# Delete solvers to save memory.\ndel style_solver, scratch_style_solver, solvers",
"Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).",
"test_net, accuracy = eval_style_net(style_weights_ft)\nprint 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )\nscratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)\nprint 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )",
"We'll first look back at the image we started with and check our end-to-end trained model's predictions.",
"plt.imshow(deprocess_net_image(image))\ndisp_style_preds(test_net, image)",
"Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.\nFinally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.",
"batch_index = 1\nimage = test_net.blobs['data'].data[batch_index]\nplt.imshow(deprocess_net_image(image))\nprint 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]\n\ndisp_style_preds(test_net, image)",
"We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.",
"disp_style_preds(scratch_test_net, image)",
"Of course, we can again look at the ImageNet model's predictions for the above image:",
"disp_imagenet_preds(imagenet_net, image)",
"So we did finetuning and it is awesome. Let's take a look at what kind of results we are able to get with a longer, more complete run of the style recognition dataset. Note: the below URL might be occasionally down because it is run on a research machine.\nhttp://demo.vislab.berkeleyvision.org/"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
XinyiGong/pymks | notebooks/filter.ipynb | mit | [
"Filter Example\nThis example demonstrates the connection between MKS and signal\nprocessing for a 1D filter. It shows that the filter is in fact the\nsame as the influence coefficients and, thus, applying the predict\nmethod provided by the MKSLocalizationnModel is in essence just applying a filter.",
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n",
"Here we construct a filter, $F$, such that\n$$F\\left(x\\right) = e^{-|x|} \\cos{\\left(2\\pi x\\right)} $$\nWe want to show that if $F$ is used to generate sample calibration\ndata for the MKS, then the calculated influence coefficients are in\nfact just $F$.",
"x0 = -10.\nx1 = 10.\nx = np.linspace(x0, x1, 1000)\ndef F(x):\n return np.exp(-abs(x)) * np.cos(2 * np.pi * x)\np = plt.plot(x, F(x), color='#1a9850')\n",
"Next we generate the sample data (X, y) using\nscipy.ndimage.convolve. This performs the convolution\n$$ p\\left[ s \\right] = \\sum_r F\\left[r\\right] X\\left[r - s\\right] $$\nfor each sample.",
"import scipy.ndimage\n\nn_space = 101\nn_sample = 50\nnp.random.seed(201)\nx = np.linspace(x0, x1, n_space)\nX = np.random.random((n_sample, n_space))\ny = np.array([scipy.ndimage.convolve(xx, F(x), mode='wrap') for xx in X])\n",
"For this problem, a basis is unnecessary as no discretization is\nrequired in order to reproduce the convolution with the MKS localization. Using\nthe ContinuousIndicatorBasis with n_states=2 is the equivalent of a\nnon-discretized convolution in space.",
"from pymks import MKSLocalizationModel\nfrom pymks import PrimitiveBasis\n\nprim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])\nmodel = MKSLocalizationModel(basis=prim_basis)\n",
"Fit the model using the data generated by $F$.",
"model.fit(X, y)\n",
"To check for internal consistency, we can compare the predicted\noutput with the original for a few values",
"y_pred = model.predict(X)\nprint y[0, :4]\nprint y_pred[0, :4]\n",
"With a slight linear manipulation of the coefficients, they agree perfectly with the shape of the filter, $F$.",
"plt.plot(x, F(x), label=r'$F$', color='#1a9850')\nplt.plot(x, -model.coeff[:,0] + model.coeff[:, 1], \n 'k--', label=r'$\\alpha$')\nl = plt.legend()",
"Some manipulation of the coefficients is required to reproduce the filter. Remember the convolution for the MKS is\n$$ p \\left[s\\right] = \\sum_{l=0}^{L-1} \\sum_{r=0}^{S - 1} \\alpha[l, r] m[l, s - r] $$\nHowever, when the primitive basis is selected, the MKSLocalizationModel solves a modified form of this. There are always redundant coefficients since\n$$ \\sum\\limits_{l=0}^{L-1} m[l, s] = 1 $$\nThus, the regression in Fourier space must be done with categorical variables, and the regression takes the following form.\n$$ \\begin{split}\np [s] & = \\sum_{l=0}^{L - 1} \\sum_{r=0}^{S - 1} \\alpha[l, r] m[l, s -r] \\\nP [k] & = \\sum_{l=0}^{L - 1} \\beta[l, k] M[l, k] \\\n&= \\beta[0, k] M[0, k] + \\beta[1, k] M[1, k]\n\\end{split}\n$$\nwhere\n$$\\beta[0, k] = \\begin{cases}\n\\langle F(x) \\rangle ,& \\text{if } k = 0\\\n0, & \\text{otherwise}\n\\end{cases} $$\nThis removes the redundancies from the regression, and we can reproduce the filter."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | apache-2.0 | [
"LAB 3c: BigQuery ML Model Deep Neural Network.\nLearning Objectives\n\nCreate and evaluate DNN model with BigQuery ML\nCreate and evaluate DNN model with feature engineering with ML.TRANSFORM.\nCalculate predictions with BigQuery's ML.PREDICT\n\nIntroduction\nIn this notebook, we will create multiple deep neural network models to predict the weight of a baby before it is born, using first no feature engineering and then the feature engineering from the previous lab using BigQuery ML.\nWe will create and evaluate a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM and calculate predictions with BigQuery's ML.PREDICT. If you need a refresher, you can go back and look how we made a baseline model in the notebook BQML Baseline Model or how we combined linear models with feature engineering in the notebook BQML Linear Models with Feature Engineering.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nLoad necessary libraries\nCheck that the Google BigQuery library is installed and if not, install it.",
"%%bash\nsudo pip freeze | grep google-cloud-bigquery==1.6.1 || \\\nsudo pip install google-cloud-bigquery==1.6.1",
"Verify tables exist\nRun the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.",
"%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_train\nLIMIT 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_eval\nLIMIT 0",
"Lab Task #1: Model 4: Increase complexity of model using DNN_REGRESSOR\nDNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.\n\n\nMODEL_TYPE=\"DNN_REGRESSOR\"\n\n\nhidden_units: List of hidden units per layer; all layers are fully connected. Number of elements in the array will be the number of hidden layers. The default value for hidden_units is [Min(128, N / (𝜶(Ni+No)))] (1 hidden layer), with N the training data size, Ni, No the input layer and output layer units, respectively, 𝜶 is constant with value 10. The upper bound of the rule will make sure the model won’t be over fitting. Note that, we currently have a model size limitation to 256MB.\n\n\ndropout: Probability to drop a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means we will not drop out any coordinate during training.\n\n\nbatch_size: Number of samples that will be served to train the network for each sub iteration. The default value is Min(1024, num_examples) to balance the training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues, and is not advised.\n\n\nCreate DNN_REGRESSOR model\nChange model type to use DNN_REGRESSOR, add a list of integer HIDDEN_UNITS, and add an integer BATCH_SIZE.\n* Hint: Create a model_4.",
"%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.model_4\nOPTIONS (\n # TODO: Add DNN options\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n # TODO: Add base features and label\nFROM\n babyweight.babyweight_data_train",
"Get training information and evaluate\nLet's first look at our training statistics.",
"%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)",
"Now let's evaluate our trained model on our eval dataset.",
"%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.model_4,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks\n FROM\n babyweight.babyweight_data_eval\n ))",
"Let's use our evaluation's mean_squared_error to calculate our model's RMSE.",
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.model_4,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks\n FROM\n babyweight.babyweight_data_eval\n ))",
"Lab Task #2: Final Model: Apply the TRANSFORM clause\nBefore we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries.\nLet's apply the TRANSFORM clause to the final model and run the query.",
"%%bigquery\nCREATE OR REPLACE MODEL\n babyweight.final_model\n\nTRANSFORM(\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n # TODO: Add FEATURE CROSS of:\n # is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks\n\nOPTIONS (\n # TODO: Add DNN options\n INPUT_LABEL_COLS=[\"weight_pounds\"],\n DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n *\nFROM\n babyweight.babyweight_data_train",
"Let's first look at our training statistics.",
"%%bigquery\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)",
"Now let's evaluate our trained model on our eval dataset.",
"%%bigquery\nSELECT\n *\nFROM\n ML.EVALUATE(MODEL babyweight.final_model,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))",
"Let's use our evaluation's mean_squared_error to calculate our model's RMSE.",
"%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.final_model,\n (\n SELECT\n *\n FROM\n babyweight.babyweight_data_eval\n ))",
"Lab Task #3: Predict with final model.\nNow that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using BigQuery ML.PREDICT function.\nPredict from final model using an example from original dataset",
"%%bigquery\nSELECT\n *\nFROM\n ML.PREDICT(MODEL babyweight.final_model,\n (\n SELECT\n # TODO Add base features example from original dataset\n ))",
"Modify above prediction query using example from simulated dataset\nUse the feature values you made up above, however set is_male to \"Unknown\" and plurality to \"Multiple(2+)\". This is simulating us not knowing the gender or the exact plurality.",
"%%bigquery\nSELECT\n *\nFROM\n ML.PREDICT(MODEL babyweight.final_model,\n (\n SELECT\n # TODO Add base features example from simulated dataset\n ))",
"Lab Summary:\nIn this lab, we created and evaluated a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM and calculated predictions with BigQuery's ML.PREDICT.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/kfp-tekton-backend | samples/core/ai_platform/ai_platform.ipynb | apache-2.0 | [
"Chicago Crime Prediction Pipeline\nAn example notebook that demonstrates how to:\n* Download data from BigQuery\n* Create a Kubeflow pipeline\n* Include Google Cloud AI Platform components to train and deploy the model in the pipeline\n* Submit a job for execution\n* Query the final deployed model\nThe model forecasts how many crimes are expected to be reported the next day, based on how many were reported over the previous n days.\nImports",
"%%capture\n\n# Install the SDK (Uncomment the code if the SDK is not installed before)\n!python3 -m pip install 'kfp>=0.1.31' --quiet\n!python3 -m pip install pandas --upgrade -q\n\n# Restart the kernel for changes to take effect\n\nimport json\n\nimport kfp\nimport kfp.components as comp\nimport kfp.dsl as dsl\n\nimport pandas as pd\n\nimport time",
"Pipeline\nConstants",
"# Required Parameters\nproject_id = '<ADD GCP PROJECT HERE>'\noutput = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash\n\n\n# Optional Parameters\nREGION = 'us-central1'\nRUNTIME_VERSION = '1.13'\nPACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])\nTRAINER_OUTPUT_GCS_PATH = output + '/train/output/' + str(int(time.time())) + '/'\nDATA_GCS_PATH = output + '/reports.csv'\nPYTHON_MODULE = 'trainer.task'\nPIPELINE_NAME = 'Chicago Crime Prediction'\nPIPELINE_FILENAME_PREFIX = 'chicago'\nPIPELINE_DESCRIPTION = ''\nMODEL_NAME = 'chicago_pipeline_model' + str(int(time.time()))\nMODEL_VERSION = 'chicago_pipeline_model_v1' + str(int(time.time()))",
"Download data\nDefine a download function that uses the BigQuery component",
"bigquery_query_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')\n\nQUERY = \"\"\"\n SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day\n FROM `bigquery-public-data.chicago_crime.crime`\n GROUP BY day\n ORDER BY day\n\"\"\"\n\ndef download(project_id, data_gcs_path):\n\n return bigquery_query_op(\n query=QUERY,\n project_id=project_id,\n output_gcs_path=data_gcs_path\n )",
"Train the model\nRun training code that will pre-process the data and then submit a training job to the AI Platform.",
"mlengine_train_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/train/component.yaml')\n\ndef train(project_id,\n trainer_args,\n package_uris,\n trainer_output_gcs_path,\n gcs_working_dir,\n region,\n python_module,\n runtime_version):\n\n return mlengine_train_op(\n project_id=project_id, \n python_module=python_module,\n package_uris=package_uris,\n region=region,\n args=trainer_args,\n job_dir=trainer_output_gcs_path,\n runtime_version=runtime_version\n )",
"Deploy model\nDeploy the model with the ID given from the training step",
"mlengine_deploy_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')\n\ndef deploy(\n project_id,\n model_uri,\n model_id,\n model_version,\n runtime_version):\n \n return mlengine_deploy_op(\n model_uri=model_uri,\n project_id=project_id, \n model_id=model_id, \n version_id=model_version, \n runtime_version=runtime_version, \n replace_existing_version=True, \n set_default=True)",
"Define pipeline",
"@dsl.pipeline(\n name=PIPELINE_NAME,\n description=PIPELINE_DESCRIPTION\n)\n\ndef pipeline(\n data_gcs_path=DATA_GCS_PATH,\n gcs_working_dir=output,\n project_id=project_id,\n python_module=PYTHON_MODULE,\n region=REGION,\n runtime_version=RUNTIME_VERSION,\n package_uris=PACKAGE_URIS,\n trainer_output_gcs_path=TRAINER_OUTPUT_GCS_PATH,\n): \n download_task = download(project_id,\n data_gcs_path)\n\n train_task = train(project_id,\n json.dumps(\n ['--data-file-url',\n '%s' % download_task.outputs['output_gcs_path'],\n '--job-dir',\n output]\n ),\n package_uris,\n trainer_output_gcs_path,\n gcs_working_dir,\n region,\n python_module,\n runtime_version)\n \n deploy_task = deploy(project_id,\n train_task.outputs['job_dir'],\n MODEL_NAME,\n MODEL_VERSION,\n runtime_version) \n return True\n\n# Reference for invocation later\npipeline_func = pipeline",
"Submit the pipeline for execution",
"pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})\n\n# Run the pipeline on a separate Kubeflow Cluster instead\n# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)\n# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipeline, arguments={})",
"Wait for the pipeline to finish",
"run_detail = pipeline.wait_for_run_completion(timeout=1800)\nprint(run_detail.run.status)",
"Use the deployed model to predict (online prediction)",
"import os\nos.environ['MODEL_NAME'] = MODEL_NAME\nos.environ['MODEL_VERSION'] = MODEL_VERSION",
"Create normalized input representing 14 days prior to prediction day.",
"%%writefile test.json\n{\"lstm_input\": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387 , -0.90387016]]}\n\n!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.json",
"Examine cloud services invoked by the pipeline\n\nBigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')\nAI Platform training job: https://console.cloud.google.com/ai-platform/jobs\nAI Platform model serving: https://console.cloud.google.com/ai-platform/models\n\nClean models",
"# !gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME\n# !gcloud ai-platform models delete $MODEL_NAME"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs | development/tutorials/beaming_boosting.ipynb | gpl-3.0 | [
"Beaming and Boosting\nDue to concerns about accuracy, support for Beaming & Boosting has been disabled as of the 2.2 release of PHOEBE (although we hope to bring it back in a future release).\nIt may come as surprise that support for Doppler boosting has been dropped in PHOEBE 2.2. This document details the underlying causes for that decision and explains the conditions that need to be met for boosting to be re-incorporated into PHOEBE.\nLet's start by reviewing the theory behind Doppler boosting. The motion of the stars towards or away from the observer changes the amount of received flux due to three effects:\n\nthe spectrum is Doppler-shifted, so the flux, being the passband-weighted integral of the spectrum, changes;\nthe photons' arrival rate changes due to time dilation; and\nradiation is beamed in the direction of motion due to light aberration.\n\nIt turns out that the combined boosting signal can be written as:\n$$ I_\\lambda = I_{\\lambda,0} \\left( 1 - B(\\lambda) \\frac{v_r}c \\right), $$\nwhere $I_{\\lambda,0}$ is the intrinsic (rest-frame) passband intensity, $I_\\lambda$ is the boosted passband intensity, $v_r$ is radial velocity, $c$ is the speed of light and $B(\\lambda)$ is the boosting index:\n$$ B(\\lambda) = 5 + \\frac{\\mathrm{d}\\,\\mathrm{ln}\\, I_\\lambda}{\\mathrm{d}\\,\\mathrm{ln}\\, \\lambda}. $$\nThe term $\\mathrm{d}(\\mathrm{ln}\\, I_\\lambda) / \\mathrm{d}(\\mathrm{ln}\\, \\lambda)$ is called spectral index. As $I_\\lambda$ depends on $\\lambda$, we average it across the passband:\n$$ B_\\mathrm{pb} = \\frac{\\int_\\lambda \\mathcal{P}(\\lambda) \\mathcal S(\\lambda) B(\\lambda) \\mathrm d\\lambda}{\\int_\\lambda \\mathcal{P}(\\lambda) \\mathcal S(\\lambda) \\mathrm d\\lambda}. $$\nIn what follows we will code up these steps and demonstrate the inherent difficulty of realizing a robust, reliable treatment of boosting.\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.4,<2.5\"",
"Import all python modules that we'll need:",
"import phoebe\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy.io import fits",
"Pull a set of Sun-like emergent intensities as a function of $\\mu = \\cos \\theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here):",
"wl = np.arange(900., 39999.501, 0.5)/1e10\nwith fits.open('T06000G40P00.fits') as hdu:\n Imu = 1e7*hdu[0].data",
"Grab only the normal component for testing purposes:",
"Inorm = Imu[-1,:]",
"Now let's load a Johnson V passband and the transmission function $P(\\lambda)$ contained within:",
"pb = phoebe.get_passband('Johnson:V')",
"Tesselate the wavelength interval to the range covered by the passband:",
"keep = (wl >= pb.ptf_table['wl'][0]) & (wl <= pb.ptf_table['wl'][-1])\nInorm = Inorm[keep]\nwl = wl[keep]",
"Calculate $S(\\lambda) P(\\lambda)$ and plot it, to make sure everything so far makes sense:",
"plt.plot(wl, Inorm*pb.ptf(wl), 'b-')\nplt.show()",
"Now let's compute the term $\\mathrm{d}(\\mathrm{ln}\\, I_\\lambda) / \\mathrm{d}(\\mathrm{ln}\\, \\lambda)$. First we will compute $\\mathrm{ln}\\,\\lambda$ and $\\mathrm{ln}\\,I_\\lambda$ and plot them:",
"lnwl = np.log(wl)\nlnI = np.log(Inorm)\n\nplt.xlabel(r'$\\mathrm{ln}\\,\\lambda$')\nplt.ylabel(r'$\\mathrm{ln}\\,I_\\lambda$')\nplt.plot(lnwl, lnI, 'b-')\nplt.show()",
"Per equation above, $B(\\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem: what part of this graph do we fit a line to? In versions 2 and 2.1, PHOEBE used a 5th order Legendre polynomial to fit the spectrum and then sigma-clipping to get to the continuum. Finally, it computed an average derivative of that Legendrian and proclaimed that $B(\\lambda)$. The order of the Legendre polynomial and the values of sigma for sigma-clipping have been set ad-hoc and kept fixed for every single spectrum.",
"envelope = np.polynomial.legendre.legfit(lnwl, lnI, 5)\ncontinuum = np.polynomial.legendre.legval(lnwl, envelope)\ndiff = lnI-continuum\nsigma = np.std(diff)\nclipped = (diff > -sigma)\nwhile True:\n Npts = clipped.sum()\n envelope = np.polynomial.legendre.legfit(lnwl[clipped], lnI[clipped], 5)\n continuum = np.polynomial.legendre.legval(lnwl, envelope)\n diff = lnI-continuum\n clipped = clipped & (diff > -sigma)\n if clipped.sum() == Npts:\n break\n\nplt.xlabel(r'$\\mathrm{ln}\\,\\lambda$')\nplt.ylabel(r'$\\mathrm{ln}\\,I_\\lambda$')\nplt.plot(lnwl, lnI, 'b-')\nplt.plot(lnwl, continuum, 'r-')\nplt.show()",
"It is clear that there is a pretty strong systematics here that we sweep under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we claim that we support boosting.\nFor fun, this is what would happen if we tried to estimate $B(\\lambda)$ at each $\\lambda$:",
"dlnwl = lnwl[1:]-lnwl[:-1]\ndlnI = lnI[1:]-lnI[:-1]\nB = dlnI/dlnwl\n\nplt.plot(0.5*(wl[1:]+wl[:-1]), B, 'b-')\nplt.show()",
"Numerical artifacts dominate and there is little hope to get a sensible (let alone robust) value using this method."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
srippa/nn_deep | NN playground.ipynb | mit | [
"resourcus\n\nI am trask blog - simple introduction to NN\nNeural bnetwork tutorial - walk all the way. A similar tutorial\nAndrew Ng ML course\nPedro domingos course\nNN papers\nBrief introduction to deep learning. Based on the Deep learning lab\n\nCourses\n\nCSC321 Winter 2015: Introduction to NN - Toronto\n\nPapers\n\nDeep learning in Neural Networks: An Overview (2014)\n\nCode\n\nhttp://upul.github.io/2015/10/12/Training-(deep)-Neural-Networks-Part:-1/ - example of backpropagation in Python",
"import numpy as np\nimport matplotlib ",
"General terminology and notations\nThe notation in here follow the notations in Chapter 2 of the deep learning online book. We note that there are different notations in different places, all refer differently to the same mathematical construct.\n\nWe consider $L$ layers marked by $l=1,\\ldots,L$ where $l=1$ denotes the input layer\nLayer $l$ has $s_l$ units referred to by $a^{l}_j, j = 1,\\ldots,s_l$.\n\nThe matrix $W^l : s_{l} \\times s_{l-1} , l=2,\\ldots, L$ controls the mapping from layer $l-1$ to layer $l$. The vector $\\mathbf{b}^l$ of size $s_{l}$ corresponds to the bias term and in layer $l$. The weight $w^l_{ij}$ is the weight associated with the connection of neuron $j$ in layer $l-1$ to neuron $i$ in layer $l$\n\n\nForward propagation: $\\mathbf{a}^l = \\sigma(\\mathbf{z}^l)$ where $\\mathbf{z}^l=W^l \\mathbf{a}^{(l-1)}+ \\mathbf{b}^{(l)} , l =2,\\ldots,L$ where the activation function $\\sigma \\equiv \\sigma_l$ is applied to each component of its argument vector. For simplicity of notations we often write $\\sigma$ instead of $\\sigma_l$\n\n\nSynonims\n\nNeuron - inspired from biology analogy\nUnit - It’s one component of a large network\nFeature - It implements a feature detector that’s looking at the input and will turn on iff the sought feature is present in the input\n\nA note about dot product\nThe activation function works on the outcome of the done way to think about this is in terms of correlation which are normalized dot products. Thus what we really measure is the degree of correlation, or dependence, between the input vector and the coefficient vector. We can view at the dot product as:\n* A correlation filter - fires if a correlation between input and weights exceeds a threshold\n* A feature detector - Detect if a specific pattern occur in the input\nOutput unit\nThe output values are computed by similarly multplying the values oh $h$ by another weight matrix,\n$\\def \\mathbf \\mathbf {}$\n\n$\\mathbf{a}^L = \\sigma_L(\\mathbf{W^L} \\cdot \\mathbf{a}^{(L-1)} + \\mathbf{b}^L) = \\sigma_L(\\mathbf{z^L}) $\n\nLinear regression network\nDefined when $\\sigma_L=I$. In that case the output is in a form suitable for linear regression.\nSoftmax function\nFor classification problems we want to convert them into probabilities. This is achieved by using the softmax function.\n\n$\\sigma_L(z) = \\frac{1}{\\alpha}e^z$ where $\\alpha = \\sum_i e^{z^L_i}$ which produces an output vector $\\mathbf{a}^L \\equiv \\mathbf{y} = (y_1,\\ldots,y_{s_L}), y_i = \\frac{e^{z^L_i}}{\\sum_i e^{z^L_i}} , i = 1,\\ldots, s_L$ \n\nThe element $y_i$ is the probability that the label of the output is $i$. This is indeed the same expression utilized by logistic regression for classification of many labels. The label $i^$ that corresponds to a given input vector $\\mathbf{a}^1$ ise selected as the index $i^$ for which $y_i$ is maximal.\nPopular types of activation functions\nExample of some popular activation functions:\n * Sigmoid: Transfor inner product into an S shaped curve. There are several popular alternatives for a Sigmoid activation function:\n * The logistic function: $\\sigma(z) = \\frac{1}{1+ e^{-z}}$ hase values in [0,1] and thus can be interperable as probabiliy.\n * Hyperbolic tangent: $\\sigma(z) = \\frac{e^z - e^{-z}}{e^z + e^{-z}}$ with values in $(-1,1)$\n * [Rectifier](https://en.wikipedia.org/wiki/Rectifier_(neural_networks): $\\sigma(z) = \\max(0,z)$. 
A unit that user a rectifier function is called a rectified linear unit (ReLU).\n * [softplus](https://en.wikipedia.org/wiki/Rectifier_(neural_networks): $\\sigma(z) = \\ln (1+e^z)$ is a smooth approximation to the rectifier function. \nSynonyms for the term \"unit activation\"\n\nUnit's value: View it as a function of the input\nActivation: Emphasizes that the unit may be responding or not, or to an extent; it’s most appropriate for logistic units\nOutput\n\nPython example : some activation functions",
"def sigmoid(x):\n return 1./(1.+np.exp(-x))\n\ndef rectifier(x):\n return np.array([max(xv,0.0) for xv in x])\n\ndef softplus(x):\n return np.log(1.0 + np.exp(x))\n\nx = np.array([1.0,0,0])\nw = np.array([0.2,-0.03,0.14])\nprint ' Scalar product between unit and weights ',x.dot(w)\nprint ' Values of Sigmoid activation function ',sigmoid(x.dot(w))\nprint ' Values of ta activation function ',np.tanh(x.dot(w))\nprint ' Values of sofplus activation function ',softplus(x.dot(w))\n\n\nimport pylab\n\nz = np.linspace(-2,2,100) # 100 linearly spaced numbers\ns = sigmoid(z) # computing the values of \nth = np.tanh(z) # computing the values of \nre = rectifier(z) # computing the values of rectifier\nsp = softplus(z) # computing the values of rectifier\n\n# compose plot\npylab.plot(z,s) \npylab.plot(z,s,'co',label='Sigmoid') # Sigmoid \npylab.plot(z,th,label='tanh') # tanh\npylab.plot(z,re,label='rectifier') # rectifier\npylab.plot(z,sp,label='softplut') # rectifier\npylab.legend()\npylab.show() # show the plot",
"Python example : Simple feed forward classification NN",
"def softmax(z):\n alpha = np.sum(np.exp(z))\n return np.exp(z)/alpha\n\n# Input\na0 = np.array([1.,0,0])\n\n# First layer\nW1 = np.array([[0.2,0.15,-0.01],[0.01,-0.1,-0.06],[0.14,-0.2,-0.03]])\nb1 = np.array([1.,1.,1.])\nz1 = W1.dot(a0) + b1\na1 = np.tanh(z1)\n\n# Output layer\nW2 = np.array([[0.08,0.11,-0.3],[0.1,-0.15,0.08],[0.1,0.1,-0.07]])\nb2 = np.array([0.,1.,0.])\nz2 = W2.dot(a1) + b2\na2 = y = softmax(z2)\nimax = np.argmax(y)\n\nprint ' z1 ',z1\nprint ' a1 ',np.tanh(z1)\nprint ' z2 ',z2\nprint ' y ',y\nprint ' Input vector {0} is classified to label {1} '.format(a0,imax)\n\n\nprint '\\n'\nfor i in [0,1,2]:\n print 'The probablity for classifying to label ',i,' is ',y[i]",
"Cost (or error) functions\nSuppose that the expected output for an input vector ${\\bf x} \\equiv {\\bf a^1}$ is ${\\bf y} = {\\bf y_x}^ = (0,1,0)$, we can now compute the error vector ${\\bf e}= {\\bf e_x}= {\\bf a_x}^L-{\\bf y_x}^$. With this error, we can now compute a cost $C=C_x$ assotiated with the output $\\bf{y_x}$ of the input vector ${\\bf x}$ (also called loss) function. For convinience of notations we will frequently omitt the subscript $x$.\nPopular loss functions are:\n* Absolute cost $C = C({\\bf a}^L)=\\sum_i |e_i|$\n* Square cost $C= C({\\bf a}^L) = \\sum_i e_i^2$\n* Cross entropy loss $C=C({\\bf a}^L) = -\\sum_i y_i^\\log{a^L_i} \\equiv -\\sum_i y_i^\\log{y_i}$. The rationale here is that the output of the softmax function is a probability distribution and we can also view the real label vector $y$ as a probability distribution (1 for the corerct label and 0 for all other labels). The cross entropy function is a common way to measure difference between distributions.\nThe total error from all $N$ data vectors is computed as the average of the individual error terms associated with each input vector ${\\bf x}$, that is:$\\frac{1}{N} \\sum_x C_x$",
"def abs_loss(e):\n return np.sum(np.abs(e))\n\ndef sqr_loss(e):\n return np.sum(e**2)\n\ndef cross_entropy_loss(y_estimated,y_real):\n return -np.sum(y_real*np.log(y_estimated))\n\ny_real = np.array([0.,1.,0])\nerr = a2 - ystar\n\nprint ' Error ',err\nprint ' Absolute loss ',abs_loss(err)\nprint ' Square loss ',sqr_loss(err)\nprint ' Cross entropy loss ',cross_entropy_loss(a2,y_real)\n\n",
"Backpropagation\nBackpropagation is a fast way of computing the derivatives $\\frac{\\partial C}{\\partial w^l_{ij}}$ and $\\frac{\\partial C}{\\partial b_i}$ which are needed for the Stochastic Gradient Descent procedure used for minimizing the cost function. Backpropagation is a special case of a more general technique called reverse mode automatic differentiation. The backpropagation algorithm is a smart application of the chain rule to allow efficient calculation of needed derivatives. A detailed discussion on the derivation of backpropagation if provided in this tutorial. An example for a simple python implementation is provided in here.\nhttp://upul.github.io/2015/10/12/Training-(deep)-Neural-Networks-Part:-1/\nDerivation of backpropagation\nDefine the vector ${\\mathbf \\delta}^l= \\frac{\\partial C}{\\partial z^l}$, that is $\\delta^l_i = \\frac{\\partial C}{\\partial z^l_i}, i =1,\\ldots,s_l$. \nRecall that $z^{l+1}i = \\sum_j w^{l+1}{ij} a^l_j + b^{l+1}i = \\sum_j w^{l+1}{ij} \\sigma_l(z^l_j) + b^{l+1}_i$ Then we have\n* $\\delta^L_i = \\frac{\\partial C}{\\partial z_i^L} = \\sum_k \\frac{\\partial C}{\\partial a_k^L} \\frac{\\partial a_k^L}{\\partial z_i^L} = \\frac{\\partial C}{\\partial a_i^L} \\frac{\\partial a_i^L}{\\partial z_i^L}= \\frac{\\partial C}{\\partial a^L_i} \\sigma'_L ( z^L_i)$ \n\n\n$\\delta^l_i = \\sum_k \\frac{\\partial C}{\\partial z_k^{l+1}} \\frac{\\partial z_k^{l+1}}{\\partial z_i^L} = \\sum_k \\delta_k^{l+1} w_{ki}^{l+1} \\sigma'_l(z_i^l) = \\sigma'_l(z_i^l) \\cdot ((W^{l+1})^T \\delta^{l+1})_i $\n\n\n$\\frac{\\partial C}{\\partial b^l_{i}} = \\delta^{l}_i$\n\n\n$\\frac{\\partial C}{\\partial w^l_{ij}} = \\frac{\\partial C}{\\partial z^{l}{i}} \\frac{\\partial z^{l}{i}}{\\partial w^{l}_{ij}} = \\delta^{l}_i a^{l-1}_j$\n\n\nIn vector form:\n\n\n$\\delta^L = \\frac{\\partial C}{\\partial {\\bf a}^L} \\odot $$\\sigma'_L ({\\bf z}^L)$ where $\\odot$ is the Hadamard elementwise product.\n\n\n$\\delta^l = (W^{l+1})^T \\delta^{l+1} \\odot \\sigma'_l({\\bf z}^l)$\n\n\n$\\frac{\\partial C}{\\partial b^l} = \\delta^{l}$\n\n\n$\\frac{\\partial C}{\\partial w^l_{ij}} = \\delta^{l}_i a^{l-1}_j$\n\n\nhttp://karpathy.github.io/neuralnets/\nGradient descent example: http://upul.github.io/2015/10/12/Training-(deep)-Neural-Networks-Part:-1/\nhttp://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/\nSGD tricks: http://research.microsoft.com/pubs/192769/tricks-2012.pdf\nhttp://www.marekrei.com/blog/26-things-i-learned-in-the-deep-learning-summer-school/\nhttp://code.activestate.com/recipes/578148-simple-back-propagation-neural-network-in-python-s/\nhttp://deeplearning.net/tutorial/\nhttp://stackoverflow.com/questions/15395835/simple-multi-layer-neural-network-implementation"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statkraft/shyft-doc | notebooks/repository/repositories-intro.ipynb | lgpl-3.0 | [
"Exposing the API\nIntroduction\nAt its core, Shyft provides functionality through an API (Application Programming Interface). All the functionality of Shyft is available through this API.\nWe begin the tutorials by introducing the API as it provides the building blocks for the framework. Once you have a good understan\nIn Part I of the simulation tutorials, we covered conducting a very simple simulation of an example catchment using configuration files. This is a typical use case, but assumes that you have a model well configured and ready for simulation. In practice, one is interested in working with the model, testing different configurations, and evaluating different data sources.\nThis is in fact a key idea of Shyft -- to make it simple to evaluate the impact of the selection of model routine on the performance of the simulation. In this notebook we walk through a lower level paradigm of working with the toolbox and using the Shyft API directly to conduct the simulations.\nThis notebook is guiding through the simulation process of a catchment. The following steps are described:\n1. Loading required python modules and setting path to SHyFT installation\n2. Running of a Shyft simulation\n3. Running a Shyft simulation with updated parameters\n4. Activating the simulation only for selected catchments\n5. Setting up different input datasets\n6. Changing state collection settings\n7. Post processing and extracting results\n1. Loading required python modules and setting path to SHyFT installation\nShyft requires a number of different modules to be loaded as part of the package. Below, we describe the required steps for loading the modules, and note that some steps are only required for the use of the jupyter notebook.",
"# Pure python modules and jupyter notebook functionality\n# first you should import the third-party python modules which you'll use later on\n# the first line enables that figures are shown inline, directly in the notebook\n%matplotlib inline\nimport os\nimport datetime as dt\nimport pandas as pd\nfrom os import path\nimport sys\nfrom matplotlib import pyplot as plt\nfrom netCDF4 import Dataset",
"The Shyft Environment\nThis next step is highly specific on how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant lines below.\nIf you have a 'system' shyft, or used conda install -s sigbjorn shyft to install shyft, then you probably will want to make sure you have set the SHYFT_DATA directory correctly, as otherwise, Shyft will assume the above structure and fail. This has to be done before import shyft. In that case, uncomment the relevant lines below.\nnote: it is most likely that you'll need to do one or the other.",
"# try to auto-configure the path, -will work in all cases where doc and data\n# are checked out at same level\nshyft_data_path = path.abspath(\"../../../shyft-data\")\nif path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:\n os.environ['SHYFT_DATA']=shyft_data_path\n \n# shyft should be available either by it's install in python\n# or by PYTHONPATH set by user prior to starting notebook.\n# This is equivalent to the two lines below\n# shyft_path=path.abspath('../../../shyft')\n# sys.path.insert(0,shyft_path)\n\nfrom shyft import api\nimport shyft\n\nprint(shyft.__path__)",
"2. A Shyft simulation\nThe purpose of this notebook is to demonstrate setting up a Shyft simulation using existing repositories. Eventually, you will want to learn to write your own repositories, but once you understand what is presented herein, you'll be well on your way to working with Shyft.\nIf you prefer to take a high level approach, you can start by looking at the Run Nea Nidelva notebook. We recommend taking the time to understand the lower level functionality of Shyft, however, as it will be of value later if you want to use your own data and create your own repositories.\nOrchestration and Repositories\nA core philosophy of Shyft is that \"Data should live at the source\". What this means, is that we prefer datasets to either remain in their original format or even come directly from the data provider. To accomplish this, we use \"repositories\". You can read more about repositories at the Shyft Documentation.\nInterfaces\nBecause it is our hope that users will create their own repositories to meet the specifications of their own datasets, we provide 'interfaces'. This is a programming concept that you may not be familiar with. The idea is that it is a basic example, or template, of how the class should work. You can use these and your own class can inherit from them, allowing you to override methods to meet your own specifications. We'll explore this as we move through this tutorial. A nice explanation of interfaces with python is available here.\nInitial Configuration\nWhat is required to set up a simulation? In the following we'll package some basic information into a dictionaries that may be used to configure our simualtion. We'll start by creating a couple of dictionaries that will be used to instantiate an existing repository class that was created for demonstration purposes, CFRegionModelRepository.\nIf it hasn't been said enough, there is a lot of functionality in the repositories! You can write a repository to suit your own use case, and it is encouraged to look at this source code.",
"# we need to import the repository to use it in a dictionary:\nfrom shyft.repository.netcdf.cf_region_model_repository import CFRegionModelRepository",
"region specification\nThe first dictionary essentially establishes the domain of the simulation. We also specify a repository that is used to read the data that will provide Shyft a region_model (discussed below), based on geographic data. The geographic consists of properties of the catchment, e.g. \"forest fraction\", \"lake fraction\", etc.",
"# next, create the simulation dictionary\nRegionDict = {'region_model_id': 'demo', #a unique name identifier of the simulation\n 'domain': {'EPSG': 32633,\n 'nx': 400,\n 'ny': 80,\n 'step_x': 1000,\n 'step_y': 1000,\n 'lower_left_x': 100000,\n 'lower_left_y': 6960000},\n 'repository': {'class': shyft.repository.netcdf.cf_region_model_repository.CFRegionModelRepository,\n 'params': {'data_file': 'netcdf/orchestration-testdata/cell_data.nc'}},\n }",
"The first keys, are probably quite clear:\n\nstart_datetime: a string in the format: \"2013-09-01T00:00:00\"\nrun_time_step: an integer representing the time step of the simulation (in seconds), so for a daily step: 86400\nnumber_of_steps: an integer for how long the simulatoin should run: 365 (for a year long simulation)\nregion_model_id: a string to name the simulation: 'neanidelva-ptgsk'\n\nWe also need to know where the simulation is taking place. This information is contained in the domain:\n\nEPSG: an EPSG string to identify the coordinate system\nnx: number of 'cells' in the x direction\nny: number of 'cells' in the y direction\nstep_x: size of cell in x direction (m)\nstep_y: size of cell in y direction (m)\nlower_left_x: where (x) in the EPSG system the cells begin\nlower_left_y: where (y) in the EPSG system the cells begin\nrepository: a repository that can read the file containing data for the cells (in this case it will read a netcdf file)\n\nModel specification\nThe next dictionary provides information about the model that we would like to use in Shyft, or the 'Model Stack' as it is generally referred to. In this case, we are going to use the PTGSK model, and the rest of the dictionary provides the parameter values.",
"ModelDict = {'model_t': shyft.api.pt_gs_k.PTGSKModel, # model to construct\n 'model_parameters': {\n 'ae':{\n 'ae_scale_factor': 1.5},\n 'gs':{\n 'calculate_iso_pot_energy': False,\n 'fast_albedo_decay_rate': 6.752787747748934,\n 'glacier_albedo': 0.4,\n 'initial_bare_ground_fraction': 0.04,\n 'max_albedo': 0.9,\n 'max_water': 0.1,\n 'min_albedo': 0.6,\n 'slow_albedo_decay_rate': 37.17325702015658,\n 'snow_cv': 0.4,\n 'tx': -0.5752881492890207,\n 'snowfall_reset_depth': 5.0,\n 'surface_magnitude': 30.0,\n 'wind_const': 1.0,\n 'wind_scale': 1.8959672005350063,\n 'winter_end_day_of_year': 100},\n 'kirchner':{ \n 'c1': -3.336197322290274,\n 'c2': 0.33433661533385695,\n 'c3': -0.12503959620315988},\n 'p_corr': {\n 'scale_factor': 1.0},\n 'pt':{'albedo': 0.2,\n 'alpha': 1.26},\n }\n } ",
"In this dictionary we define two variables:\n\nmodel_t: the import path to a shyft 'model stack' class\nmodel_parameters: a dictionary containing specific parameter values for a particular model class\n\nSpecifics of the model_parameters dictionary will vary based on which class is used.\nOkay, so far we have two dictionaries. One which provides information regarding our simulation domain, and a second which provides information on the model that we wish to run over the domain (e.g. in each of the cells). The next step, then, is to map these together and create a region_repo class.\nThis is achieved by using a repository, in this case, the CFRegionModelRepository we imported above.",
"region_repo = CFRegionModelRepository(RegionDict, ModelDict)",
"The region_model\n<div class=\"alert alert-info\">\n\n**TODO:** a notebook documenting the CFRegionModelRepository\n\n</div>\n\nThe first step in conducting a hydrologic simulation is to define the domain of the simulation and the model type which we would like to simulate. To do this we create a region_model object. Above we created dictionaries that contain this information, and we instantiated a class called teh region_repo. In this next step, we put it together so that we have a single object which we can work with \"at our fingertips\". You'll note above that we have pointed to a 'data_file' earlier when we defined the RegionDict. This data file contains all the required elements to fill the cells of our domain. The informaiton is contained in a single netcdf file\nBefore we go further, let's look briefly at the contents of this file:",
"cell_data_file = os.path.join(os.environ['SHYFT_DATA'], 'netcdf/orchestration-testdata/cell_data.nc')\ncell_data = Dataset(cell_data_file)\nprint(cell_data)",
"You might be surprised to see the dimensions are 'cells', but recall that in Shyft everything is vectorized. Each 'cell' is an element within a domain, and each cell has associated variables:\n\nlocation: x, y, z\ncharacteristics: forest-fraction, reservoir-fraction, lake-fraction, glacier-fraction, catchment-id\n\nWe'll bring this data into our workspace via the region_model. Note that we have instantiated a region_repo class using one of the existing Shyft repositories, in this case one that was built for reading in the data as it is contained in the example shyft-data netcdf files: CFRegionModelRepository.\nNext, we'll use the region_repo.get_region_model method to get the region_model. Note the name 'demo', in this case is arbitrary. However, depending on how you create your repository, you can specify what region model to return using this string.\n<div class=\"alert alert-info\">\n\n\n**note:** *you are strongly encouraged to learn how to create repositories. This particular repository is just for demonstration purposes. In practice, one may use a repository that connects directly to a GIS service, a database, or some other data sets that contain the data required for simulations.*\n\n<div class=\"alert alert-warning\">\n\n**warning**: *also, please note that below we call the 'get_region_model' method as we instantiate the class. This behavior may change in the future.*\n\n</div>\n</div>",
"region_model = region_repo.get_region_model('demo')",
"Exploring the region_model\nSo we now have created a region_model, but what is it actually? This is a very fundamental class in Shyft. It is actually one of the \"model stacks\", such as 'PTGSK', or 'PTHSK'. Essentially, the region_model contains all the information regarding the simulation type and domain. There are many methods associated with the region_model and it will take time to understand all of them. For now, let's just explore a few key methods:\n\nbounding_region: provides information regarding the domain of interest for the simulation\ncatchment_id_map: indices of the various catchments within the domain\ncells: an instance of PTGSKCellAllVector that holds the individual cells for the simulation (note that this is type-specific to the model type)\nncore: an integer that sets the numbers of cores to use during simulation (Shyft is very greedy if you let it!)\ntime_axis: a shyft.api.TimeAxisFixedDeltaT class (basically contains information regarding the timing of the simulation)\n\nKeep in mind that many of these methods are more 'C++'-like than 'Pythonic'. This means, that in some cases, you'll have to 'call' the method. For example: region_model.bounding_region.epsg() returns a string. You can use tab-completion to explore the region_model further:",
"region_model.bounding_region.epsg()",
"You'll likely note that there are a number of intriguing fucntions, e.g. initialize_cell_environment or interpolate. But before we can go further, we need some more information. Perhaps you are wondering about forcing data. So far, we haven't said anything about model input or the time of the simulation, we've only set up a container that holds all the domain and model type information about our simulation. \nStill, we have made some progress. Let's look for instance at the cells:",
"cell_0 = region_model.cells[0]\nprint(cell_0.geo)",
"So you can see that so far, each of the cells in the region_model contain information regarding their LandTypeFractions, geolocation, catchment_id, and area. \nA particulary important attribute is region_model.region_env. This is a container for each cell that holds the \"environmental timeseries\", or forcing data, for the simulation. By \"tabbing\" from cell. you can see that each cell also has and env_ts attribute. These are containers customized to provide timeseries as required by the model type we selected, but there is no data yet. In this case we used the PTGSKModel (see the ModelDict). So for every cell in your simulation, there is a container prepared to accept the forcing data as the next cell shows.",
"#just so we don't see 'private' attributes\nprint([d for d in dir(cell_0.env_ts) if '_' not in d[0]]) \nregion_model.size()",
"Adding forcing data to the region_model\nClearly the next step is to add forcing data to our region_model object. Let's start by thinking about what kind of data we need. From above, where we looked at the env_ts attribute, it's clear that this particular model stack, PTGSKModel, requires:\n\nprecipitation\nradiation\nrelative humidity (rel_hum)\ntemperature\nwind speed\n\nWe have stored this information each in seperate netcdf files which each contain the observational series for a number of different stations. \n<div class=\"alert alert-warning\">\n\nAgain, these files **do not represent the recommended practice**, but are *only for demonstration purposes*. The idea here is just to demonstrate with an example repository, but *you should create your own to match **your** data*.\n\n</div>\n\nOur goal now is to populate the region_env. \n\"Sources\"\nWe use the term sources to define a location data may be coming from. You may also come across destinations. In both cases, it just means a file, database, service of some kind, etc. that is capable of providing data. Repositories are written to connect to sources. Following our earlier approach, we'll create another dictionary to define our data sources, but first we need to import another repository:",
"from shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository\n\nfrom shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository\nForcingData = {'sources': [\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'filename': 'netcdf/orchestration-testdata/precipitation.nc'},\n 'types': ['precipitation']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'filename': 'netcdf/orchestration-testdata/temperature.nc'},\n 'types': ['temperature']},\n \n {'params': {'epsg': 32633,\n 'filename': 'netcdf/orchestration-testdata/wind_speed.nc'},\n 'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'types': ['wind_speed']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'filename': 'netcdf/orchestration-testdata/relative_humidity.nc'},\n 'types': ['relative_humidity']},\n \n {'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,\n 'params': {'epsg': 32633,\n 'filename': 'netcdf/orchestration-testdata/radiation.nc'},\n 'types': ['radiation']}]\n }\n",
"Data Repositories\nIn another notebook, further information will be provided regarding the repositories. For the time being, let's look at this configuration dictionary that was created. It essentially just contains a list, keyed by the name \"sources\". This key is known in some of the tools that are built in the Shyft orchestration, so it is recommended to use it.\nEach item in the list is a dictionary for each of the source types, the keys in the dictionaries are: repository, params, and types. The general idea and concept is that in orchestration, the object keyed by repository is a class that is instantiated by passing the objects contained in params.\nLet's repeat that. From our Datasets dictionary, we get a list of \"sources\". Each of these sources contains a class (a repository) that is capable of getting the source data into Shyft. Whatever parameters that are required for the class to work, will be included in the \"sources\" dictionary. In our case, the params are quite simple, just a path to a netcdf file. But suppose our repository required credentials or other information for a database? This information could also be included in the params stanza of the dictionary.\nYou should explore the above referenced netcdf files that are available at the shyft-data git repository. These files contain the forcing data that will be used in the example simulation. Each one contains observational data from some stations in our catchment. Depending on how you write your repository, this data may be provided to Shyft in many different formats.\nLet's explore this concept further by getting the 'temperature' data:",
"# get the temperature sources:\ntmp_sources = [source for source in ForcingData['sources'] if 'temperature' in source['types']]\n\n# in this example there is only one\nt0 = tmp_sources[0]\n\n# We will now instantiate the repository with the parameters that are provided\n# in the dictionary. \n# Note the 'call' structure expects params to contain keyword arguments, and these\n# can be anything you want depending on how you create your repository\ntmp_repo = t0['repository'](**t0['params'])\n",
"tmp_repo is now an instance of the Shyft CFDataRepository, and this will provide Shyft with the data when it sets up a simulation by reading the data directly out of the file referenced in the 'source'. But that is just one repository, and we defined many in fact. Furthermore, you may have a heterogenous collection of data sources -- if for example you want to get your temperature from station data, but radiation from model output. You could define different repositories in the ForcingData dictionary.\nUltimately, we bundle all these repositories up into a new class called a GeoTsRepositoryCollection that we can use to populate the region_model.region_env with data.",
"# we'll actually create a collection of repositories, as we have different input types.\nfrom shyft.repository.geo_ts_repository_collection import GeoTsRepositoryCollection\n\ndef construct_geots_repo(datasets_config, epsg=None):\n \"\"\" iterates over the different sources that are provided \n and prepares the repository to read the data for each type\"\"\"\n geo_ts_repos = []\n src_types_to_extract = []\n for source in datasets_config['sources']:\n if epsg is not None:\n source['params'].update({'epsg': epsg})\n # note that here we are instantiating the different source repositories\n # to place in the geo_ts list \n geo_ts_repos.append(source['repository'](**source['params']))\n src_types_to_extract.append(source['types'])\n \n return GeoTsRepositoryCollection(geo_ts_repos, src_types_per_repo=src_types_to_extract)\n\n# instantiate the repository\ngeots_repo = construct_geots_repo(ForcingData)",
"geots_repo is now a \"geographic timeseries repository\", meaning that the timeseries it holds are spatially aware of their x,y,z coordinates (see CFDataRepository for details). It also has several methods. One in particular we are interested in is the get_timeseries method. However, before we can proceed, we need to define the period for the simulation.\nShyft TimeAxis\nTime in Shyft is handled with specialized C++ types for computational efficiency. These are custom built objects that are 'calendar' aware. But since in python, most like to use datetime objects, we create a function:",
"# next, create the simulation dictionary\nTimeDict = {'start_datetime': \"2013-09-01T00:00:00\",\n 'run_time_step': 86400, # seconds, daily\n 'number_of_steps': 360 # ~ one year\n }\n\ndef time_axis_from_dict(t_dict)->api.TimeAxis:\n utc = api.Calendar()\n \n sim_start = dt.datetime.strptime(t_dict['start_datetime'], \"%Y-%m-%dT%H:%M:%S\")\n utc_start = utc.time(sim_start.year, sim_start.month, sim_start.day,\\\n sim_start.hour, sim_start.minute, sim_start.second)\n tstep = t_dict['run_time_step']\n nstep = t_dict['number_of_steps']\n time_axis = api.TimeAxis(utc_start, tstep, nstep)\n \n return time_axis\n\nta_1 = time_axis_from_dict(TimeDict)\nprint(f'1. {ta_1} \\n {ta_1.total_period()}')\n# or shyft-wise, ready tested, precise and less effort, two lines\nutc = api.Calendar() # 'Europe/Oslo' can be passed to calendar for time-zone\nta_2 = api.TimeAxis(utc.time(2013, 9, 1), api.deltahours(24), 365)\nprint(f'2. {ta_2} \\n {ta_2.total_period()}')",
"We now have an object that defines the time dimension for the simulation, and we will use this to initialize the region_model with the \"environmental timeseries\" or env_ts data. These containers will be given data from the appropriate repositories using the get_timeseries function. Following the templates in the shyft.repository.interfaces module, you'll see that the repositories should provide the capability to \"screen\" data based on time criteria and optinally* geo_location criteria.",
"# we can extract our \"bounding box\" based on the `region_model` we set up\nbbox = region_model.bounding_region.bounding_box(region_model.bounding_region.epsg())\n\nperiod = ta_1.total_period() #just defined above\n\n# required forcing data sets we want to retrieve\ngeo_ts_names = (\"temperature\", \"wind_speed\", \"precipitation\",\n \"relative_humidity\", \"radiation\")\n\nsources = geots_repo.get_timeseries( geo_ts_names, period) #, geo_location_criteria=bbox )",
"Now we have a new dictionary, called 'sources' that contains specialized Shyft api types specific to each forcing data type. You can look at one for example:",
"prec = sources['precipitation']\nprint(len(prec))\n",
"We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting.\nLet's plot the precip of each of the sources:",
"fig, ax = plt.subplots(figsize=(15,10))\n\nfor pr in prec:\n t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values\n ax.plot(t,p, label=pr.mid_point().x) #uid is empty now, but we reserve for later use\nfig.autofmt_xdate()\nax.legend(title=\"Precipitation Input Sources\")\nax.set_ylabel(\"precip[mm/hr]\")",
"Finally, the next step will take the data from the sources and connect it to our region_model.region_env class:",
"def get_region_environment(sources):\n region_env = api.ARegionEnvironment()\n region_env.temperature = sources[\"temperature\"]\n region_env.precipitation = sources[\"precipitation\"]\n region_env.radiation = sources[\"radiation\"]\n region_env.wind_speed = sources[\"wind_speed\"]\n region_env.rel_hum = sources[\"relative_humidity\"]\n return region_env\n\nregion_model.region_env = get_region_environment(sources)",
"And now our forcing data is connected to the region_model. We are almost ready to run a simulation. There is just one more step. We've connected the sources to the model, but remember that Shyft is a distributed modeling framework, and we've connected point data sources (in this case). So we need to get the data from the observed points to each cell. This is done through interpolation.\nShyft Interpolation\nIn Shyft there are predefined routines for interpolation. In the interp_config class below one quickly recognizes the same input source type keywords that are used as keys to the params dictionary. params is simply a dictionary of dictionaries which contains the parameters used by the interpolation model that is specific for each source type.",
"from shyft.repository.interpolation_parameter_repository import InterpolationParameterRepository\n\nclass interp_config(object):\n \"\"\" a simple class to provide the interpolation parameters \"\"\"\n\n def __init__(self):\n \n self.interp_params = {'precipitation': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10,\n 'scale_factor': 1.02}},\n 'radiation': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}},\n 'relative_humidity': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}},\n 'temperature': {'method': 'btk',\n 'params': {'nug': 0.5,\n 'range': 200000.0,\n 'sill': 25.0,\n 'temperature_gradient': -0.6,\n 'temperature_gradient_sd': 0.25,\n 'zscale': 20.0}},\n 'wind_speed': {'method': 'idw',\n 'params': {'distance_measure_factor': 1.0,\n 'max_distance': 600000.0,\n 'max_members': 10}}}\n\n def interpolation_parameters(self):\n return self.interp_params\n\nip_conf = interp_config()\nip_repo = InterpolationParameterRepository(ip_conf)\n\nregion_model.interpolation_parameter = ip_repo.get_parameters(0) #just a '0' for now",
"The next step is to set the intial states of the model using our last repository. This one, the GeneratedStateRepository will set empty default values.\nNow we are nearly ready to conduct a simulation. We just need to run a few methods to prepare the model and cells for the simulation. The region_model has a method called initalize_cell_environment that takes a time_axis type as input. We defined the time_axis above, so now we'll use it to initialize the model. At the same time, we'll set the initial_state. Then we can actually run a simulation!",
"from shyft.repository.generated_state_repository import GeneratedStateRepository\n\ninit_values = {'gs': {'acc_melt': 0.0,\n 'albedo': 0.65,\n 'alpha': 6.25,\n 'iso_pot_energy': 0.0,\n 'lwc': 0.1,\n 'sdc_melt_mean': 0.0,\n 'surface_heat': 30000.0,\n 'temp_swe': 0.0},\n 'kirchner': {'q': 0.01}}\n\n \nstate_generator = GeneratedStateRepository(region_model)#, init_values=init_values)\n\n# we need the state_repository to have the same size as the model\n#state_repo.n = region_model.size()\n# there is only 1 state (indexed '0')\ns0 = state_generator.get_state(0)\nnot_applied_list=region_model.state.apply_state( # apply state set the current state according to arguments\n cell_id_state_vector=s0, # ok, easy to get\n cids=[] # empty means apply all, if we wanted to only apply state for specific catchment-ids, this is where to put them\n)\nassert len(not_applied_list)==0, 'Ensure all states was matched and applied to the model'\nregion_model.initial_state=region_model.current_state # now we stash the current state to the initial state",
"Conduct the simulation\nWe now have a region_model that is ready for simulation. As we discussed before, we still need to get the data from our point observations interpolated to the cells, and we need to get the env_ts of each cell populated. But all the machinery is now in place to make this happen. \nTo summarize, we've created:\n\nregion_repo, a region repository that contains information related to region of simulation and the model to be used in the simulation. From this we get a region_model\ngeots_repo, a geo-timeseries repository that provides a mechanism to pull the data we require from our 'sources'.\ntime_axis, created from the TimeAxisFixedDeltaT class of shyft to provide the period of simulation.\nip_repo, an interpolation repository which provides all the required parameters for interpolating our data to the distributed cells -- following variable specific protocols/models.\nstate_repo, a GeneratedStateRepository used to provide our simulation an initial state.\n\nThe next step is simply to initialize the cell environment and run the interpolation. As a practive, before simulation we reset to the initial state (we're there already, but it is something you have to do before a new simulation), and then run the cells. First we'll initialize the cell environment:",
"region_model.initialize_cell_environment(ta_1)",
"As a habit, we have a quick \"sanity check\" function to see if the model is runnable. Itis recommended to have this function when you create 'run scripts'.",
"def runnable(reg_mod):\n \"\"\" returns True if model is properly configured \n **note** this is specific depending on your model's input data requirements \"\"\"\n return all((reg_mod.initial_state.size() > 0, reg_mod.time_axis.size() > 0,\n all([len(getattr(reg_mod.region_env, attr)) > 0 for attr in\n (\"temperature\", \"wind_speed\", \"precipitation\", \"rel_hum\", \"radiation\")])))\n\n# run the model, e.g. as you may configure it in a script:\nif runnable(region_model):\n \n region_model.interpolate(region_model.interpolation_parameter, region_model.region_env)\n region_model.revert_to_initial_state()\n region_model.run_cells()\nelse:\n print('Something wrong with model configuration.')\n\n\n ",
"Okay, so the simulation was run. Now we may be interested in looking at some of the output. We'll take a brief summary glance in the next section, and save a deeper dive into the simulation results for another notebook.\n3. Simulation results\nThe first step will be simply to look at the discharge results for each subcatchment within our simulation domain. For simplicity, we can use a pandas.DataFrame to collect the data from each catchment.",
"# Here we are going to extact data from the simulation.\n# We start by creating a list to hold discharge for each of the subcatchments.\n# Then we'll get the data from the region_model object\n\n# mapping of internal catch ID to catchment\ncatchment_id_map = region_model.catchment_id_map \n\n# First get the time-axis which we'll use as the index for the data frame\nta = region_model.time_axis\n# and convert it to datetimes\nindex = [dt.datetime.utcfromtimestamp(p.start) for p in ta]\n\n# Now we'll add all the discharge series for each catchment \ndata = {}\nfor cid in catchment_id_map:\n # get the discharge time series for the subcatchment\n q_ts = region_model.statistics.discharge([int(cid)])\n data[cid] = q_ts.values.to_numpy()\n\ndf = pd.DataFrame(data, index=index)\n# we can simply use:\nax = df.plot(figsize=(20,15))\nax.legend(title=\"Catch. ID\")\nax.set_ylabel(\"discharge [m3 s-1]\")",
"Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. This doesn't necessarily show anything about the simulation, per se, but rather results from the interpolation step.",
"from matplotlib.cm import jet as jet\nfrom matplotlib.colors import Normalize\n\n# get all the cells for one sub-catchment with 'id' == 1228\nc1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228]\n\n# for plotting, create an mpl normalizer based on min,max elevation\nelv = [c.geo.mid_point().z for c in c1228]\nnorm = Normalize(min(elv), max(elv))\n\n#plot with line color a function of elevation\nfig, ax = plt.subplots(figsize=(15,10))\n\n# here we are cycling through each of the cells in c1228\nfor dat,elv in zip([c.env_ts.temperature.values for c in c1228], [c.mid_point().z for c in c1228]):\n ax.plot(dat, color=jet(norm(elv)), label=int(elv))\n \n \n# the following is just to plot the legend entries and not related to Shyft\nhandles, labels = ax.get_legend_handles_labels()\n\n# sort by labels\nimport operator\nhl = sorted(zip(handles, labels),\n key=operator.itemgetter(1))\nhandles2, labels2 = zip(*hl)\n\n# show legend, but only every fifth entry\nax.legend(handles2[::5], labels2[::5], title='Elevation [m]')",
"As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot.\nNow we're going to create a function that will read initial states from the initial_state_repo. In practice, this is already done by the ConfgiSimulator, but to demonstrate lower level functions, we'll reset the states of our region_model:",
"state_generator.find_state?\n\n# create a function to reaad the states from the state repository\ndef get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None):\n state_id = 0\n if hasattr(initial_state_repo_, 'n'): # No stored state, generated on-the-fly\n initial_state_repo_.n = region_model.size()\n else:\n states = initial_state_repo_.find_state(\n region_model_id_criteria=region_model_id_,\n utc_timestamp_criteria=timestamp)\n if len(states) > 0:\n state_id = states[0].state_id # most_recent_state i.e. <= start time\n else:\n raise Exception('No initial state matching criteria.')\n return initial_state_repo_.get_state(state_id)\n \ninit_state = get_init_state_from_repo(state_generator, timestamp=region_model.time_axis.start)\n",
"Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack.\nLet's look more closely:",
"def print_pub_attr(obj):\n #only public attributes\n print(f'{obj.__class__.__name__}:\\t',[attr for attr in dir(obj) if attr[0] is not '_']) \n \nprint(len(init_state))\ninit_state_cell0 = init_state[0] \n# the identifier\nprint_pub_attr(init_state_cell0.id)\n# gam snow states\nprint_pub_attr(init_state_cell0.state.gs)\n\n#init_state_cell0.kirchner states\nprint_pub_attr(init_state_cell0.state.kirchner)\n",
"Summary\nWe have now explored the region_model and looked at how to instantiate a region_model by using a api.ARegionEnvironment, containing a collection of timeseries sources, and passing an api.InterpolationParameter class containing the parameters to use for the data interpolation algorithms. The interpolation step \"populated\" our cells with data from the point sources.\nThe cells each contain all the information related to the simulation (their own timeseries, env_ts; their own model parameters, parameter; and other attributes and methods). In future tutorials we'll work with the cells indivdual \"resource collector\" (.rc) and \"state collector\" (.sc) attributes."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
IST256/learn-python | content/lessons/04-Iterations/LAB-Iterations.ipynb | mit | [
"Class Coding Lab: Iterations\nThe goals of this lab are to help you to understand:\n\nHow loops work.\nThe difference between definite and indefinite loops, and when to use each.\nHow to build an indefinite loop with complex exit conditions.\nHow to create a program from a complex idea.\n\nUnderstanding Iterations\nIterations permit us to repeat code until a Boolean expression is False. Iterations or loops allow us to write succinct, compact code. Here's an example, which counts to 3 before Blitzing the Quarterback in backyard American Football:",
"i = 1\nwhile i <= 3:\n print(i,\"Mississippi...\")\n i=i+1\nprint(\"Blitz!\")",
"Breaking it down...\nThe while statement on line 2 starts the loop. The code indented beneath the while (lines 3-4) will repeat, in a linear fashion until the Boolean expression on line 2 i <= 3 is False, at which time the program continues with line 5.\nSome Terminology\nWe call i <=3 the loop's exit condition. The variable i inside the exit condition is the only thing that we can change to make the exit condition False, therefore it is the loop control variable. On line 4 we change the loop control variable by adding one to it, this is called an increment.\nFurthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know. We call this a definite loop. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.\nIf the loop control variable never forces the exit condition to be False, we have an infinite loop. As the name implies, an Infinite loop never ends and typically causes our computer to crash or lock up.",
"## WARNING!!! INFINITE LOOP AHEAD\n## IF YOU RUN THIS CODE YOU WILL NEED TO STOP OR RESTART THE KERNEL AFTER RUNNING THIS!!!\n\ni = 1\nwhile i <= 3:\n print(i,\"Mississippi...\")\nprint(\"Blitz!\")",
"For loops\nTo prevent an infinite loop when the loop is definite, we use the for statement. Here's the same program using for:",
"for i in range(1,4):\n print(i,\"Mississippi...\")\nprint(\"Blitz!\")",
"One confusing aspect of this loop is range(1,4) why does this loop from 1 to 3? Why not 1 to 4? Well it has to do with the fact that computers start counting at zero. The easier way to understand it is if you subtract the two numbers you get the number of times it will loop. So for example, 4-1 == 3.\n1.1 You Code\nIn the space below, Re-Write the above program to count Mississippi from 10 to 15. You need practice writing loops, so make sure you do NOT copy the code.\nNote: How many times will that loop?",
"# TODO Write code here\n",
"Indefinite loops\nWith indefinite loops we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application. \nThe classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example:",
"name = \"\"\nwhile name != 'mike':\n name = input(\"Say my name! : \")\n print(f\"Nope, my name is not {name}!\")",
"In the above example, the loop will keep on looping until we enter mike. The value mike is called the sentinal value - a value we look out for, and when it exists we stop the loop. For this reason indefinite loops are also known as sentinal-controlled loops.\nThe classic problem with indefinite/sentinal controlled loops is that its really difficult to get the application's logic to line up with the exit condition. For example we need to set name = \"\" in line 1 so that line 2 starts out as True. Also we have this wonky logic where when we say 'mike' it still prints Nope, my name is not mike! before exiting.\nBreak statement\nThe solution to this problem is to use the break statement. break tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:\nwhile True:\n if sentinel-controlled-exit-condition:\n break\nHere's our program we-written with the break statement. This is the recommended way to write indefinite loops in this course.\nNOTE: We always check for the sentinal value immediately AFTER the input() function.",
"while True:\n name = input(\"Say my name!: \")\n if name == 'mike':\n break\n print(\"Nope, my name is not %s!\" %(name))",
"1.2 You Code: Debug This loop\nThis program should count the number of times you input the value ni. As soon as you enter a value other than ni the program will stop looping and print the count of ni's.\nExample Run:\nWhat say you? ni\nWhat say you? ni\nWhat say you? ni\nWhat say you? nay\nYou said 'ni' 3 times.\n\nThe problem of course, is this code wasn't written correctly. Its up to you to get it working!",
"#TODO Debug this code\nnicount=0\nwhile True:\n say = input \"What say you? \")\n if say == 'ni':\n break\n nicount = 1\nprint(f\"You said 'ni' P {nicount} times.\") ",
"Multiple exit conditions\nThis indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names. \nMake sure to run this program a couple of times to understand what is happening:\n\nFirst enter mike to exit the program, \nNext enter the wrong name 3 times.",
"times = 0\nwhile True:\n name = input(\"Say my name!: \")\n times = times + 1\n if name == 'mike': # sentinal 1\n print(\"You got it!\")\n break\n if times == 3: # sentinal 2\n print(\"Game over. Too many tries!\")\n break\n print(f\"Nope, my name is not {name}\")",
"Counting Characters in Text\nLet's conclude the lab with you writing your own program that uses both definite and indefinite loops. This program should input some text and then a character, counting the number of characters in the text. This process will repeat until the text entered is empty. \nThe program should work as follows. Example run:\nEnter a text, or press ENTER quit: mississippi\nWhich character are you searching for? i\nThere are 4 i's in mississippi\n\nEnter a text, or press ENTER quit: port-au-prince\nWhich character are you searching for? -\nThere are 4 -'s in port-au-prince\n\nEnter a text, or press ENTER quit:\nGoodbye!\n\nThis seems complicated, so let's break the problem up using the problem simplification approach.\nFirst write code to count the numbers of characters in any text. Here is the algorithm:\nset count to 0\ninput the text\ninput the search character\nfor ch in text\n if ch equals the search character\n increment the count\nprint there are {count} {search characters} in {text}\n\n1.3 You Code\nImplement the algorithm above in code in the cell below.",
"# TODO Write code here\n",
"Next, we surround the code we wrote in 1.4 with a sentinal-controlled indefinite loop. The sentinal (the part that exits the loop is when the text is empty (text==\"\") The algorithm is:\nloop\n set count to 0\n input the text\n if text is empty quit loop\n input the search character\n for ch in text\n if ch equals the search character\n increment the count\n print there are {count} {search characters} in {text}\n\n1.4 You Code\nImplement the algorithm above in code.",
"# TODO Write Code here:\n",
"Metacognition\nRate your comfort level with this week's material so far.\n1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.\n2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below. \n3 ==> I can do this on my own without any help. \n4 ==> I can do this on my own and can explain/teach how to do it to others.\n--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==-- \nQuestions And Comments\nRecord any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsilbity for your learning and ask questions in class.\n--== Double-click Here then Enter Your Questions Below this Line ==--",
"# run this code to turn in your work!\nfrom coursetools.submission import Submission\nSubmission().submit()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
minesh1291/Practicing-Kaggle | zillow2017/H2Opy_v0.ipynb | gpl-3.0 | [
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#import-Packages\" data-toc-modified-id=\"import-Packages-1\"><span class=\"toc-item-num\">1 </span>import Packages</a></div><div class=\"lev2 toc-item\"><a href=\"#H2O-init\" data-toc-modified-id=\"H2O-init-11\"><span class=\"toc-item-num\">1.1 </span>H2O init</a></div><div class=\"lev2 toc-item\"><a href=\"#import-xy_train,-x_test\" data-toc-modified-id=\"import-xy_train,-x_test-12\"><span class=\"toc-item-num\">1.2 </span>import xy_train, x_test</a></div><div class=\"lev2 toc-item\"><a href=\"#27-AUG-2017-dl_model\" data-toc-modified-id=\"27-AUG-2017-dl_model-13\"><span class=\"toc-item-num\">1.3 </span>27-AUG-2017 dl_model</a></div><div class=\"lev3 toc-item\"><a href=\"#Model-Details\" data-toc-modified-id=\"Model-Details-131\"><span class=\"toc-item-num\">1.3.1 </span>Model Details</a></div><div class=\"lev2 toc-item\"><a href=\"#28-AUG-2017-dl_model_list-1\" data-toc-modified-id=\"28-AUG-2017-dl_model_list-1-14\"><span class=\"toc-item-num\">1.4 </span>28-AUG-2017 dl_model_list 1</a></div><div class=\"lev3 toc-item\"><a href=\"#split-the-data-3-ways:\" data-toc-modified-id=\"split-the-data-3-ways:-141\"><span class=\"toc-item-num\">1.4.1 </span>split the data 3 ways:</a></div><div class=\"lev3 toc-item\"><a href=\"#desicion\" data-toc-modified-id=\"desicion-142\"><span class=\"toc-item-num\">1.4.2 </span>desicion</a></div><div class=\"lev2 toc-item\"><a href=\"#28-AUG-2017-dl_model_list-2\" data-toc-modified-id=\"28-AUG-2017-dl_model_list-2-15\"><span class=\"toc-item-num\">1.5 </span>28-AUG-2017 dl_model_list 2</a></div><div class=\"lev2 toc-item\"><a href=\"#28-AUG-2017-dl_model_list-3\" data-toc-modified-id=\"28-AUG-2017-dl_model_list-3-16\"><span class=\"toc-item-num\">1.6 </span>28-AUG-2017 dl_model_list 3</a></div><div class=\"lev3 toc-item\"><a href=\"#30,40-nurons,-4,5-layers\" data-toc-modified-id=\"30,40-nurons,-4,5-layers-161\"><span class=\"toc-item-num\">1.6.1 </span>30,40 nurons, 4,5 layers</a></div><div class=\"lev3 toc-item\"><a href=\"#tests\" data-toc-modified-id=\"tests-162\"><span class=\"toc-item-num\">1.6.2 </span>tests</a></div><div class=\"lev2 toc-item\"><a href=\"#Predict-test_h2o-&-combine\" data-toc-modified-id=\"Predict-test_h2o-&-combine-17\"><span class=\"toc-item-num\">1.7 </span>Predict test_h2o & combine</a></div><div class=\"lev2 toc-item\"><a href=\"#Predict-x_test-&-combine\" data-toc-modified-id=\"Predict-x_test-&-combine-18\"><span class=\"toc-item-num\">1.8 </span>Predict x_test & combine</a></div>\n\n# import Packages",
"import h2o\nimport time,os\n\n%matplotlib inline \n#IMPORT ALL THE THINGS\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom h2o.estimators.deeplearning import H2OAutoEncoderEstimator, H2ODeepLearningEstimator\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\nfrom h2o.estimators.glm import H2OGeneralizedLinearEstimator\nfrom h2o.estimators.random_forest import H2ORandomForestEstimator",
"H2O init",
"h2o.init(max_mem_size = 20) #uses all cores by default\nh2o.remove_all()",
"import xy_train, x_test",
"xy_tr = h2o.import_file(path = os.path.realpath(\"../daielee/xy_tr.csv\"))\nx_test = h2o.import_file(path = os.path.realpath(\"../daielee/x_test.csv\"))\n\nxy_tr_df = xy_tr.as_data_frame(use_pandas=True)\nx_test_df = x_test.as_data_frame(use_pandas=True)\n\nprint (xy_tr_df.shape,x_test_df.shapepe)",
"27-AUG-2017 dl_model\nModel Details\n\ndl_model = H2ODeepLearningEstimator(epochs=1000)\ndl_model.train(X, y, xy_tr)\n\n=============\n* H2ODeepLearningEstimator : Deep Learning\n* Model Key: DeepLearning_model_python_1503841734286_1\n\nModelMetricsRegression: deeplearning\n\n Reported on train data. \n\n\nMSE: 0.02257823450695032\n\nRMSE: 0.15026055539279204\nMAE: 0.06853673758752012\nRMSLE: NaN\nMean Residual Deviance: 0.02257823450695032",
"X = xy_tr.col_names[0:57]\ny = xy_tr.col_names[57]\ndl_model = H2ODeepLearningEstimator(epochs=1000)\ndl_model.train(X, y, xy_tr)\n\ndl_model.summary\n\nsh = dl_model.score_history()\nsh = pd.DataFrame(sh)\nprint(sh.columns)\n\nsh.plot(x='epochs',y = ['training_deviance', 'training_mae'])\n\ndl_model.default_params\n\n dl_model.model_performance(test_data=xy_tr)\n\npd.DataFrame(dl_model.varimp())\n\ny_test = dl_model.predict(test_data=x_test)\n\nprint(y_test.shape)",
"28-AUG-2017 dl_model_list 1",
"nuron_cnts = [40,80,160]\nlayer_cnts = [1,2,3,4,5]\nacts = [\"Tanh\",\"Maxout\",\"Rectifier\",\"RectifierWithDropout\"]\nmodels_list = []\nm_names_list = []\ni = 0\n# N 3 * L 5 * A 4 = 60n \nfor act in acts:\n for layer_cnt in layer_cnts:\n for nuron_cnt in nuron_cnts:\n m_names_list.append(\"N:\"+str(nuron_cnt)+\"L:\"+str(layer_cnt)+\"A:\"+act)\n print(m_names_list[i])\n models_list.append(H2ODeepLearningEstimator(\n model_id=m_names_list[i],\n hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions\n activation = act,\n epochs=10, # to keep it short enough\n score_validation_samples=10000,\n overwrite_with_best_model=True,\n adaptive_rate=True,\n l1=0.00001, # add some L1/L2 regularization\n l2=0.00001,\n max_w2=10.0 # helps stability for Rectifier\n ))\n \n models_list[i].train(x=X,y=y,training_frame=xy_tr,\n validation_frame=xy_tr)\n i+=1\n\nfor i in range(0,639): #range(len(models_list)-1):\n try:\n sh = models_list[i].score_history()\n sh = pd.DataFrame(sh)\n perform = sh['validation_deviance'].tolist()[-1]\n print(models_list[i].model_id,end=\" \")\n print(perform)\n except:\n print(end=\"\")",
"split the data 3 ways:\n\n60% for training \n20% for validation (hyper parameter tuning) \n\n20% for final testing \n\n\nWe will train a data set on one set and use the others to test the validity of the model by ensuring that it can predict accurately on data the model has not been shown. \n\nThe second set will be used for validation most of the time. \nThe third set will be withheld until the end, to ensure that our validation accuracy is consistent with data we have never seen during the iterative process. \n\ndesicion\nUse Rect-dropout",
"train_h2o, valid_h2o, test_h2o = xy_tr.split_frame([0.6, 0.2], seed=1234)",
"28-AUG-2017 dl_model_list 2",
"nuron_cnts = [40,80,160]\nlayer_cnts = [1,2,3,4,5]\nacts = [\"RectifierWithDropout\"] #\"Tanh\",\"Maxout\",\"Rectifier\",\nmodels_list = []\nm_names_list = []\ntime_tkn_wall =[]\ntime_tkn_clk=[]\ni = 0\n# N 3 * L 5 * A 1 = 15n \nfor act in acts:\n for layer_cnt in layer_cnts:\n for nuron_cnt in nuron_cnts:\n m_names_list.append(\"N: \"+str(nuron_cnt)+\" L: \"+str(layer_cnt)+\" A: \"+act)\n print(m_names_list[i])\n models_list.append(H2ODeepLearningEstimator(\n model_id=m_names_list[i],\n hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions\n activation = act,\n epochs=10, # to keep it short enough\n score_validation_samples=10000,\n overwrite_with_best_model=True,\n adaptive_rate=True,\n l1=0.00001, # add some L1/L2 regularization\n l2=0.00001,\n max_w2=10.0 # helps stability for Rectifier\n ))\n str_time_clk = time.clock()\n str_time_wall = time.time()\n \n models_list[i].train(x=X,y=y,training_frame=train,\n validation_frame=valid)\n time_tkn_clk.append(time.clock()-str_time_clk)\n time_tkn_wall.append(time.time()-str_time_wall)\n \n i+=1",
"time.time() shows that the wall-clock time has passed approximately one second while time.clock() shows the CPU time spent on the current process is less than 1 microsecond. time.clock() has a much higher precision than time.time().",
"for i in range(len(models_list)-1):\n try:\n sh = models_list[i].score_history()\n sh = pd.DataFrame(sh)\n perform = sh['validation_deviance'].tolist()[-1]\n print(models_list[i].model_id,end=\" \")\n print(\" clk \"+str(time_tkn_clk[i])+\" wall \"+str(time_tkn_wall[i]),end=\" \")\n print(perform)\n except:\n print(end=\"\")",
"28-AUG-2017 dl_model_list 3\n30,40 nurons, 4,5 layers",
"nuron_cnts = [30,40,50]\nlayer_cnts = [4,5]\nacts = [\"RectifierWithDropout\"] #\"Tanh\",\"Maxout\",\"Rectifier\",\ndout=0.5\nmodels_list = []\nm_names_list = []\ntime_tkn_wall =[]\ntime_tkn_clk=[]\n\ni = 0\n# N 1 * L 10 * A 1 = 10n \nfor act in acts:\n for layer_cnt in layer_cnts:\n for nuron_cnt in nuron_cnts:\n m_names_list.append(\"N: \"+str(nuron_cnt)+\" L: \"+str(layer_cnt)+\" A: \"+act)\n print(m_names_list[i])\n models_list.append(H2ODeepLearningEstimator(\n model_id=m_names_list[i],\n hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions\n hidden_dropout_ratios=[dout]*layer_cnt,\n activation = act,\n epochs=500, # to keep it short enough\n train_samples_per_iteration=300,\n score_validation_samples=10000,\n loss=\"absolute\",\n overwrite_with_best_model=True,\n adaptive_rate=True,\n l1=0.00001, # add some L1/L2 regularization\n l2=0.0001,\n max_w2=10.0, # helps stability for Rectifier\n variable_importances=True\n ))\n str_time_clk = time.clock()\n str_time_wall = time.time()\n \n models_list[i].train(x=X,y=y,training_frame=train,\n validation_frame=valid)\n time_tkn_clk.append(time.clock()-str_time_clk)\n time_tkn_wall.append(time.time()-str_time_wall)\n \n i+=1",
"tests",
"dl_pref=dl_model.model_performance(test_data=test) \n\ndl_model.mean\n\n dl_pref.mae()\n\ntrain.shape\nmodels_list[0].model_id\n\nfor i in range(len(models_list)):\n try:\n sh = models_list[i].score_history()\n sh = pd.DataFrame(sh)\n sh.plot(x='epochs',y = ['training_mae', 'validation_mae'])\n tr_perform = sh['training_mae'].tolist()[-1]\n val_perform = sh['validation_mae'].tolist()[-1]\n ts_perform= models_list[i].model_performance(test_data=test).mae() \n print(models_list[i].model_id,end=\" \")\n print(\"clk \"+str(round(time_tkn_clk[i],2))+\"\\twall \"+str(round(time_tkn_wall[i]/60,2)),end=\"\\t\")\n print(\n \"tr \" + str(round(tr_perform,6)) +\"\\tval \" + str(round(val_perform,6)) + \"\\tts \" + str(round(ts_perform,6))\n )\n except:\n print(end=\"\")",
"Predict test_h2o & combine\nPredict x_test & combine",
"import numpy as np\nimport pandas as pd\nimport xgboost as xgb\nfrom sklearn.preprocessing import LabelEncoder\nimport lightgbm as lgb\nimport gc\nfrom sklearn.linear_model import LinearRegression\nimport random\nimport datetime as dt\n\nnp.random.seed(17)\nrandom.seed(17)\n\ntrain = pd.read_csv(\"../input/train_2016_v2.csv\", parse_dates=[\"transactiondate\"])\nproperties = pd.read_csv(\"../input/properties_2016.csv\")\nsubmission = pd.read_csv(\"../input/sample_submission.csv\")\nprint(len(train),len(properties),len(submission))\n\n\ndef get_features(df):\n df[\"transactiondate\"] = pd.to_datetime(df[\"transactiondate\"])\n df[\"transactiondate_year\"] = df[\"transactiondate\"].dt.year\n df[\"transactiondate_month\"] = df[\"transactiondate\"].dt.month\n df['transactiondate'] = df['transactiondate'].dt.quarter\n df = df.fillna(-1.0)\n return df\n\ndef MAE(y, ypred):\n #logerror=log(Zestimate)−log(SalePrice)\n return np.sum([abs(y[i]-ypred[i]) for i in range(len(y))]) / len(y)\n\n\ntrain = pd.merge(train, properties, how='left', on='parcelid')\ny = train['logerror'].values\ntest = pd.merge(submission, properties, how='left', left_on='ParcelId', right_on='parcelid')\nproperties = [] #memory\n\nexc = [train.columns[c] for c in range(len(train.columns)) if train.dtypes[c] == 'O'] + ['logerror','parcelid']\ncol = [c for c in train.columns if c not in exc]\n\n\ntrain = get_features(train[col])\ntest['transactiondate'] = '2016-01-01' #should use the most common training date\ntest = get_features(test[col])\n\n\nreg = LinearRegression(n_jobs=-1)\nreg.fit(train, y); print('fit...')\nprint(MAE(y, reg.predict(train)))\ntrain = []; y = [] #memory\n\ntest_dates = ['2016-10-01','2016-11-01','2016-12-01','2017-10-01','2017-11-01','2017-12-01']\ntest_columns = ['201610','201611','201612','201710','201711','201712']\n\n\npred0 = models_list[1].predict(test_data=x_test).as_data_frame(use_pandas=True)\n\npred0.head(n=5)\n\nOLS_WEIGHT = 0.0856\n\n\nprint( \"\\nPredicting with OLS and combining with XGB/LGB/baseline predicitons: ...\" )\nfor i in range(len(test_dates)):\n test['transactiondate'] = test_dates[i]\n pred = OLS_WEIGHT * reg.predict(get_features(test)) + (1-OLS_WEIGHT)*pred0.values[:,0]\n submission[test_columns[i]] = [float(format(x, '.4f')) for x in pred]\n print('predict...', i)\n\nprint( \"\\nCombined XGB/LGB/baseline/OLS predictions:\" )\nprint( submission.head() )\n\n\nfrom datetime import datetime \nsubmission.to_csv('sub{}.csv'.format(datetime.now().strftime('%Y%m%d_%H%M%S')), index=False)\n\n\nh2o.model.regression.h2o_mean_absolute_error(y_actual=,y_predicted=)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/zh-cn/guide/keras/custom_callback.ipynb | apache-2.0 | [
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"编写自己的回调函数\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/guide/keras/custom_callback\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 上查看源代码</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/custom_callback.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a> </td>\n</table>\n\n简介\n回调是一种可以在训练、评估或推断过程中自定义 Keras 模型行为的强大工具。示例包括使用 TensorBoard 来呈现训练进度和结果的 tf.keras.callbacks.TensorBoard,以及用来在训练期间定期保存模型的 tf.keras.callbacks.ModelCheckpoint。\n在本指南中,您将了解什么是 Keras 回调函数,它可以做什么,以及如何构建自己的回调函数。我们提供了一些简单回调函数应用的演示,以帮助您入门。\n设置",
"import tensorflow as tf\nfrom tensorflow import keras",
"Keras 回调函数概述\n所有回调函数都将 keras.callbacks.Callback 类作为子类,并重写在训练、测试和预测的各个阶段调用的一组方法。回调函数对于在训练期间了解模型的内部状态和统计信息十分有用。\n您可以将回调函数的列表(作为关键字参数 callbacks)传递给以下模型方法:\n\nkeras.Model.fit()\nkeras.Model.evaluate()\nkeras.Model.predict()\n\n回调函数方法概述\n全局方法\non_(train|test|predict)_begin(self, logs=None)\n在 fit/evaluate/predict 开始时调用。\non_(train|test|predict)_end(self, logs=None)\n在 fit/evaluate/predict 结束时调用。\nBatch-level methods for training/testing/predicting\non_(train|test|predict)_batch_begin(self, batch, logs=None)\n正好在训练/测试/预测期间处理批次之前调用。\non_(train|test|predict)_batch_end(self, batch, logs=None)\n在训练/测试/预测批次结束时调用。在此方法中,logs 是包含指标结果的字典。\n周期级方法(仅训练)\non_epoch_begin(self, epoch, logs=None)\n在训练期间周期开始时调用。\non_epoch_end(self, epoch, logs=None)\n在训练期间周期开始时调用。\n基本示例\n让我们来看一个具体的例子。首先,导入 Tensorflow 并定义一个简单的序列式 Keras 模型:",
"# Define the Keras model to add callbacks to\ndef get_model():\n model = keras.Sequential()\n model.add(keras.layers.Dense(1, input_dim=784))\n model.compile(\n optimizer=keras.optimizers.RMSprop(learning_rate=0.1),\n loss=\"mean_squared_error\",\n metrics=[\"mean_absolute_error\"],\n )\n return model\n",
"然后,从 Keras 数据集 API 加载 MNIST 数据进行训练和测试:",
"# Load example MNIST data and pre-process it\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train = x_train.reshape(-1, 784).astype(\"float32\") / 255.0\nx_test = x_test.reshape(-1, 784).astype(\"float32\") / 255.0\n\n# Limit the data to 1000 samples\nx_train = x_train[:1000]\ny_train = y_train[:1000]\nx_test = x_test[:1000]\ny_test = y_test[:1000]",
"接下来,定义一个简单的自定义回调函数来记录以下内容:\n\nfit/evaluate/predict 开始和结束的时间\n每个周期开始和结束的时间\n每个训练批次开始和结束的时间\n每个评估(测试)批次开始和结束的时间\n每次推断(预测)批次开始和结束的时间",
"class CustomCallback(keras.callbacks.Callback):\n def on_train_begin(self, logs=None):\n keys = list(logs.keys())\n print(\"Starting training; got log keys: {}\".format(keys))\n\n def on_train_end(self, logs=None):\n keys = list(logs.keys())\n print(\"Stop training; got log keys: {}\".format(keys))\n\n def on_epoch_begin(self, epoch, logs=None):\n keys = list(logs.keys())\n print(\"Start epoch {} of training; got log keys: {}\".format(epoch, keys))\n\n def on_epoch_end(self, epoch, logs=None):\n keys = list(logs.keys())\n print(\"End epoch {} of training; got log keys: {}\".format(epoch, keys))\n\n def on_test_begin(self, logs=None):\n keys = list(logs.keys())\n print(\"Start testing; got log keys: {}\".format(keys))\n\n def on_test_end(self, logs=None):\n keys = list(logs.keys())\n print(\"Stop testing; got log keys: {}\".format(keys))\n\n def on_predict_begin(self, logs=None):\n keys = list(logs.keys())\n print(\"Start predicting; got log keys: {}\".format(keys))\n\n def on_predict_end(self, logs=None):\n keys = list(logs.keys())\n print(\"Stop predicting; got log keys: {}\".format(keys))\n\n def on_train_batch_begin(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Training: start of batch {}; got log keys: {}\".format(batch, keys))\n\n def on_train_batch_end(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Training: end of batch {}; got log keys: {}\".format(batch, keys))\n\n def on_test_batch_begin(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Evaluating: start of batch {}; got log keys: {}\".format(batch, keys))\n\n def on_test_batch_end(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Evaluating: end of batch {}; got log keys: {}\".format(batch, keys))\n\n def on_predict_batch_begin(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Predicting: start of batch {}; got log keys: {}\".format(batch, keys))\n\n def on_predict_batch_end(self, batch, logs=None):\n keys = list(logs.keys())\n print(\"...Predicting: end of batch {}; got log keys: {}\".format(batch, keys))\n",
"我们来试一下:",
"model = get_model()\nmodel.fit(\n x_train,\n y_train,\n batch_size=128,\n epochs=1,\n verbose=0,\n validation_split=0.5,\n callbacks=[CustomCallback()],\n)\n\nres = model.evaluate(\n x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]\n)\n\nres = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()])",
"logs 字典的用法\nlogs 字典包含损失值,以及批次或周期结束时的所有指标。示例包括损失和平均绝对误差。",
"class LossAndErrorPrintingCallback(keras.callbacks.Callback):\n def on_train_batch_end(self, batch, logs=None):\n print(\n \"Up to batch {}, the average loss is {:7.2f}.\".format(batch, logs[\"loss\"])\n )\n\n def on_test_batch_end(self, batch, logs=None):\n print(\n \"Up to batch {}, the average loss is {:7.2f}.\".format(batch, logs[\"loss\"])\n )\n\n def on_epoch_end(self, epoch, logs=None):\n print(\n \"The average loss for epoch {} is {:7.2f} \"\n \"and mean absolute error is {:7.2f}.\".format(\n epoch, logs[\"loss\"], logs[\"mean_absolute_error\"]\n )\n )\n\n\nmodel = get_model()\nmodel.fit(\n x_train,\n y_train,\n batch_size=128,\n epochs=2,\n verbose=0,\n callbacks=[LossAndErrorPrintingCallback()],\n)\n\nres = model.evaluate(\n x_test,\n y_test,\n batch_size=128,\n verbose=0,\n callbacks=[LossAndErrorPrintingCallback()],\n)",
"self.model 属性的用法\n除了在调用其中一种方法时接收日志信息外,回调还可以访问与当前一轮训练/评估/推断有关的模型:self.model。\n以下是您可以在回调函数中使用 self.model 进行的一些操作:\n\n设置 self.model.stop_training = True 以立即中断训练。\n转变优化器(可作为 self.model.optimizer)的超参数,例如 self.model.optimizer.learning_rate。\n定期保存模型。\n在每个周期结束时,在少量测试样本上记录 model.predict() 的输出,以用作训练期间的健全性检查。\n在每个周期结束时提取中间特征的可视化,随时间推移监视模型当前的学习内容。\n其他\n\n下面我们通过几个示例来看看它是如何工作的。\nKeras 回调函数应用示例\n在达到最小损失时尽早停止\n第一个示例展示了如何通过设置 self.model.stop_training(布尔)属性来创建能够在达到最小损失时停止训练的 Callback。您还可以提供参数 patience 来指定在达到局部最小值后应该等待多少个周期然后停止。\ntf.keras.callbacks.EarlyStopping 提供了一种更完整、更通用的实现。",
"import numpy as np\n\n\nclass EarlyStoppingAtMinLoss(keras.callbacks.Callback):\n \"\"\"Stop training when the loss is at its min, i.e. the loss stops decreasing.\n\n Arguments:\n patience: Number of epochs to wait after min has been hit. After this\n number of no improvement, training stops.\n \"\"\"\n\n def __init__(self, patience=0):\n super(EarlyStoppingAtMinLoss, self).__init__()\n self.patience = patience\n # best_weights to store the weights at which the minimum loss occurs.\n self.best_weights = None\n\n def on_train_begin(self, logs=None):\n # The number of epoch it has waited when loss is no longer minimum.\n self.wait = 0\n # The epoch the training stops at.\n self.stopped_epoch = 0\n # Initialize the best as infinity.\n self.best = np.Inf\n\n def on_epoch_end(self, epoch, logs=None):\n current = logs.get(\"loss\")\n if np.less(current, self.best):\n self.best = current\n self.wait = 0\n # Record the best weights if current results is better (less).\n self.best_weights = self.model.get_weights()\n else:\n self.wait += 1\n if self.wait >= self.patience:\n self.stopped_epoch = epoch\n self.model.stop_training = True\n print(\"Restoring model weights from the end of the best epoch.\")\n self.model.set_weights(self.best_weights)\n\n def on_train_end(self, logs=None):\n if self.stopped_epoch > 0:\n print(\"Epoch %05d: early stopping\" % (self.stopped_epoch + 1))\n\n\nmodel = get_model()\nmodel.fit(\n x_train,\n y_train,\n batch_size=64,\n steps_per_epoch=5,\n epochs=30,\n verbose=0,\n callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],\n)",
"学习率规划\n在此示例中,我们展示了如何在学习过程中使用自定义回调来动态更改优化器的学习率。\n有关更通用的实现,请查看 callbacks.LearningRateScheduler。",
"class CustomLearningRateScheduler(keras.callbacks.Callback):\n \"\"\"Learning rate scheduler which sets the learning rate according to schedule.\n\n Arguments:\n schedule: a function that takes an epoch index\n (integer, indexed from 0) and current learning rate\n as inputs and returns a new learning rate as output (float).\n \"\"\"\n\n def __init__(self, schedule):\n super(CustomLearningRateScheduler, self).__init__()\n self.schedule = schedule\n\n def on_epoch_begin(self, epoch, logs=None):\n if not hasattr(self.model.optimizer, \"lr\"):\n raise ValueError('Optimizer must have a \"lr\" attribute.')\n # Get the current learning rate from model's optimizer.\n lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))\n # Call schedule function to get the scheduled learning rate.\n scheduled_lr = self.schedule(epoch, lr)\n # Set the value back to the optimizer before this epoch starts\n tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)\n print(\"\\nEpoch %05d: Learning rate is %6.4f.\" % (epoch, scheduled_lr))\n\n\nLR_SCHEDULE = [\n # (epoch to start, learning rate) tuples\n (3, 0.05),\n (6, 0.01),\n (9, 0.005),\n (12, 0.001),\n]\n\n\ndef lr_schedule(epoch, lr):\n \"\"\"Helper function to retrieve the scheduled learning rate based on epoch.\"\"\"\n if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:\n return lr\n for i in range(len(LR_SCHEDULE)):\n if epoch == LR_SCHEDULE[i][0]:\n return LR_SCHEDULE[i][1]\n return lr\n\n\nmodel = get_model()\nmodel.fit(\n x_train,\n y_train,\n batch_size=64,\n steps_per_epoch=5,\n epochs=15,\n verbose=0,\n callbacks=[\n LossAndErrorPrintingCallback(),\n CustomLearningRateScheduler(lr_schedule),\n ],\n)",
"内置 Keras 回调函数\n请务必阅读 API 文档查看现有的 Keras 回调函数。应用包括记录到 CSV、保存模型、在 TensorBoard 中可视化指标等等!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
TomTranter/OpenPNM | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | mit | [
"Tutorial 3 of 3: Advanced Topics and Usage\nLearning Outcomes\n\nUse different methods to add boundary pores to a network\nManipulate network topology by adding and removing pores and throats\nExplore the ModelsDict design, including copying models between objects, and changing model parameters\nWrite a custom pore-scale model and a custom Phase\nAccess and manipulate objects associated with the network\nCombine multiple algorithms to predict relative permeability\n\nBuild and Manipulate Network Topology\nFor the present tutorial, we'll keep the topology simple to help keep the focus on other aspects of OpenPNM.",
"import warnings\nimport numpy as np\nimport scipy as sp\nimport openpnm as op\n%matplotlib inline\nnp.random.seed(10)\nws = op.Workspace()\nws.settings['loglevel'] = 40\nnp.set_printoptions(precision=4)\npn = op.network.Cubic(shape=[10, 10, 10], spacing=0.00006, name='net')",
"Adding Boundary Pores\nWhen performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the Cubic class, two methods are available for doing this: add_boundaries, which is specific for the Cubic class, and add_boundary_pores, which is a generic method that can also be used on other network types and which is inherited from GenericNetwork. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network spacing. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify to which pores the boundary pores should be attached to. Let's explore these two options:",
"pn.add_boundary_pores(labels=['top', 'bottom'])",
"Let's quickly visualize this network with the added boundaries:",
"#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(pn, c='r')\nfig = op.topotools.plot_coordinates(pn, c='b', fig=fig)\nfig.set_size_inches([10, 10])",
"Adding and Removing Pores and Throats\nOpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the 'throat.conns' array must be treated carefully when trimming pores, so OpenPNM provides the extend and trim functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure:",
"Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True\nop.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim",
"When the trim function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the Network's check_network_health method which returns a HealthDict containing the results of the checks:",
"a = pn.check_network_health()\nprint(a)",
"The HealthDict contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the HealthDict has a health attribute that is False is any checks fail.",
"op.topotools.trim(network=pn, pores=a['trim_pores'])",
"Let's take another look at the network to see the trimmed pores and throats:",
"#NBVAL_IGNORE_OUTPUT\nfig = op.topotools.plot_connections(pn, c='r')\nfig = op.topotools.plot_coordinates(pn, c='b', fig=fig)\nfig.set_size_inches([10, 10])",
"Define Geometry Objects\nThe boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate Geometry objects, one for internal pores and one for the boundaries:",
"Ps = pn.pores('*boundary', mode='not')\nTs = pn.throats('*boundary', mode='not')\ngeom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')\nPs = pn.pores('*boundary')\nTs = pn.throats('*boundary')\nboun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun')",
"The StickAndBall class is preloaded with the pore-scale models to calculate all the necessary size information (pore diameter, pore.volume, throat lengths, throat.diameter, etc). The Boundary class is speciall and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values such that they don't affect the simulation results. \nDefine Multiple Phase Objects\nIn order to simulate relative permeability of air through a partially water-filled network, we need to create each Phase object. OpenPNM includes pre-defined classes for each of these common fluids:",
"air = op.phases.Air(network=pn)\nwater = op.phases.Water(network=pn)\nwater['throat.contact_angle'] = 110\nwater['throat.surface_tension'] = 0.072",
"Aside: Creating a Custom Phase Class\nIn many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom Phase class as follows:",
"from openpnm.phases import GenericPhase\n\nclass Oil(GenericPhase):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.add_model(propname='pore.viscosity',\n model=op.models.misc.polynomial,\n prop='pore.temperature',\n a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])\n self['pore.molecular_weight'] = 116 # g/mol",
"Creating a Phase class basically involves placing a series of self.add_model commands within the __init__ section of the class definition. This means that when the class is instantiated, all the models are added to itself (i.e. self).\n**kwargs is a Python trick that captures all arguments in a dict called kwargs and passes them to another function that may need them. In this case they are passed to the __init__ method of Oil's parent by the super function. Specifically, things like name and network are expected.\nThe above code block also stores the molecular weight of the oil as a constant value\nAdding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project.",
"oil = Oil(network=pn)\nprint(oil)",
"Define Physics Objects for Each Geometry and Each Phase\nIn the tutorial #2 we created two Physics object, one for each of the two Geometry objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own Geometry, but there are two Phases, which also each require a unique Physics:",
"phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)\nphys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)\nphys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)\nphys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun)",
"To reiterate, one Physics object is required for each Geometry AND each Phase, so the number can grow to become annoying very quickly Some useful tips for easing this situation are given below.\n\nCreate a Custom Pore-Scale Physics Model\nPerhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM models libraries. For demonstration, let's implement the capillary pressure model proposed by Mason and Morrow in 1994. They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \\sigma \\cdot cos(2/3 \\theta) / R_t$\nPore-scale models are written as basic function definitions:",
"def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle', \n sigma='throat.surface_tension', f=0.6667):\n proj = target.project\n network = proj.network\n phase = proj.find_phase(target)\n Dt = network[diameter]\n theta = phase[theta]\n sigma = phase[sigma]\n Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt\n return Pc[phase.throats(target.name)]",
"Let's examine the components of above code:\n\nThe function receives a target object as an argument. This indicates which object the results will be returned to. \nThe f value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.\nNote the pore.diameter is actually a Geometry property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.\nAll of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a target like a Physics object, that is a subset of the full domain. As such, the last line extracts values from the Pc array for the location of target and returns just the subset.\nThe actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.\nIt is good practice to include the dictionary keys as arguments, such as sigma = 'throat.contact_angle'. This way the user can control where the contact angle could be stored on the target object.\n\nCopy Models Between Physics Objects\nAs mentioned above, the need to specify a separate Physics object for each Geometry and Phase can become tedious. It is possible to copy the pore-scale models assigned to one object onto another object. First, let's assign the models we need to phys_water_internal:",
"mod = op.models.physics.hydraulic_conductance.hagen_poiseuille\nphys_water_internal.add_model(propname='throat.hydraulic_conductance',\n model=mod)\n\nphys_water_internal.add_model(propname='throat.entry_pressure',\n model=mason_model)",
"Now make a copy of the models on phys_water_internal and apply it all the other water Physics objects:",
"phys_water_boundary.models = phys_water_internal.models",
"The only 'gotcha' with this approach is that each of the Physics objects must be regenerated in order to place numerical values for all the properties into the data arrays:",
"phys_water_boundary.regenerate_models()\nphys_air_internal.regenerate_models()\nphys_air_internal.regenerate_models()",
"Adjust Pore-Scale Model Parameters\nThe pore-scale models are stored in a ModelsDict object that is itself stored under the models attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of models on the object's wo which they apply. The models on an object can be inspected with print(phys_water_internal), which shows a list of all the pore-scale properties that are computed by a model, and some information about the model's regeneration mode.\nEach model in the ModelsDict can be individually inspected by accessing it using the dictionary key corresponding to pore-property that it calculates, i.e. print(phys_water_internal)['throat.capillary_pressure']). This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly:",
"phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value\nphys_water_internal.regenerate_models() # Regenerate model with new 'f' value",
"More details about the ModelsDict and ModelWrapper classes can be found in :ref:models.\nPerform Multiphase Transport Simulations\nUse the Built-In Drainage Algorithm to Generate an Invading Phase Configuration",
"inv = op.algorithms.Porosimetry(network=pn)\ninv.setup(phase=water)\ninv.set_inlets(pores=pn.pores(['top', 'bottom']))\ninv.run()",
"The inlet pores were set to both 'top' and 'bottom' using the pn.pores method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.\nThe run method automatically generates a list of 25 capillary pressure points to test, but you can also specify more pores, or which specific points to tests. See the methods documentation for the details.\nOnce the algorithm has been run, the resulting capillary pressure curve can be viewed with plot_drainage_curve. If you'd prefer a table of data for plotting in your software of choice you can use get_drainage_data which prints a table in the console.\n\nSet Pores and Throats to Invaded\nAfter running, the mip object possesses an array containing the pressure at which each pore and throat was invaded, stored as 'pore.inv_Pc' and 'throat.inv_Pc'. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic:",
"Pi = inv['pore.invasion_pressure'] < 5000\nTi = inv['throat.invasion_pressure'] < 5000",
"The resulting Boolean masks can be used to manually adjust the hydraulic conductivity of pores and throats based on their phase occupancy. The following lines set the water filled throats to near-zero conductivity for air flow:",
"Ts = phys_water_internal.map_throats(~Ti, origin=water)\nphys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20",
"The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if either or both of the pores are filled as well.\nThe above approach can get complicated if there are several Geometry objects, and it is also a bit laborious. There is a pore-scale model for this under Physics.models.multiphase called conduit_conductance. The term conduit refers to the path between two pores that includes 1/2 of each pores plus the connecting throat.\n\nCalculate Relative Permeability of Each Phase\nWe are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate an StokesFlow object:",
"water_flow = op.algorithms.StokesFlow(network=pn, phase=water)\nwater_flow.set_value_BC(pores=pn.pores('left'), values=200000)\nwater_flow.set_value_BC(pores=pn.pores('right'), values=100000)\nwater_flow.run()\nQ_partial, = water_flow.rate(pores=pn.pores('right'))",
"The relative permeability is the ratio of the water flow through the partially water saturated media versus through fully water saturated media; hence we need to find the absolute permeability of water. This can be accomplished by regenerating the phys_water_internal object, which will recalculate the 'throat.hydraulic_conductance' values and overwrite our manually entered near-zero values from the inv simulation using phys_water_internal.models.regenerate(). We can then re-use the water_flow algorithm:",
"phys_water_internal.regenerate_models()\nwater_flow.run()\nQ_full, = water_flow.rate(pores=pn.pores('right'))",
"And finally, the relative permeability can be found from:",
"K_rel = Q_partial/Q_full\nprint(f\"Relative permeability: {K_rel:.5f}\")",
"The ratio of the flow rates gives the normalized relative permeability since all the domain size, viscosity and pressure differential terms cancel each other.\nTo generate a full relative permeability curve the above logic would be placed inside a for loop, with each loop increasing the pressure threshold used to obtain the list of invaded throats (Ti).\nThe saturation at each capillary pressure can be found be summing the pore and throat volume of all the invaded pores and throats using Vp = geom['pore.volume'][Pi] and Vt = geom['throat.volume'][Ti]."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quantopian/research_public | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | apache-2.0 | [
"Exercises: Comparing ETFs - Answer Key\nBy Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie\nLecture Link :\nhttps://www.quantopian.com/lectures/statistical-moments\nhttps://www.quantopian.com/lectures/hypothesis-testing\nIMPORTANT NOTE:\nThis lecture corresponds to the statistical moments and hypothesis testing lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.\nWhen you feel comfortable with the topics presented here, see if you can create an algorithm that qualifies for the Quantopian Contest. Participants are evaluated on their ability to produce risk-constrained alpha and the top 10 contest participants are awarded cash prizes on a daily basis.\nhttps://www.quantopian.com/contest\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public\n\n\nKey Concepts\nt-statistic formula for unequal variances : $ t = \\frac{\\bar{X}_1 - \\bar{X}_2}{(\\frac{s_1^2}{n_1} + \\frac{s_2^2}{n_2})^{1/2}}$\nWhere $s_1$ and $s_2$ are the standard deviation of set 1 and set 2; and $n_1$ and $n_2$ are the number of observations we have.",
"# Useful functions\ndef normal_test(X):\n z, pval = stats.normaltest(X)\n if pval < 0.05:\n print 'Values are not normally distributed.'\n else: \n print 'Values are normally distributed.'\n return\n\n# Useful Libraries\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport seaborn as sns",
"Data:",
"# Get pricing data for an energy (XLE) and industrial (XLI) ETF\nxle = get_pricing('XLE', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')\nxli = get_pricing('XLI', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')\n\n# Compute returns\nxle_returns = xle.pct_change()[1:]\nxli_returns = xli.pct_change()[1:]",
"Exercise 1 : Hypothesis Testing on Variances.\n\nPlot the histogram of the returns of XLE and XLI\nCheck to see if each return stream is normally distributed\nIf the assets are normally distributed, use the F-test to perform a hypothesis test and decide whether they have the two assets have the same variance.\nIf the assets are not normally distributed, use the Levene test (in the scipy library) to perform a hypothesis test on variance.",
"xle = plt.hist(xle_returns, bins=30)\nxli = plt.hist(xli_returns, bins=30, color='r')\n\nplt.xlabel('returns')\nplt.ylabel('Frequency')\nplt.title('Histogram of the returns of XLE and XLI')\nplt.legend(['XLE returns', 'XLI returns']);\n\n# Checking for normality using function above. \n\nprint 'XLE'\nnormal_test(xle_returns)\nprint 'XLI'\nnormal_test(xli_returns)\n\n# Because the data is not normally distributed, we must use the levene and not the F-test of variance. \n\nstats.levene(xle_returns, xli_returns)",
"Since we find a pvalue for the Levene test of less than our $\\alpha$ level (0.05), we can reject the null hypothesis that the variability of the two groups are equal thus implying that the variances are unequal.\n\nExercise 2 : Hypothesis Testing on Means.\nSince we know that the variances are not equal, we must use Welch's t-test. \n- Calculate the mean returns of XLE and XLI.\n - Find the difference between the two means.\n- Calculate the standard deviation of the returns of XLE and XLI\n- Using the formula given above, calculate the t-test statistic (Using $\\alpha = 0.05$) for Welch's t-test to test whether the mean returns of XLE and XLI are different.\n- Consult the Hypothesis Testing Lecture to calculate the p-value for this test. Are the mean returns of XLE and XLI the same?\n\nNow use the t-test function for two independent samples from the scipy library. Compare the results.",
"# Manually calculating the t-statistic\n\nN1 = len(xle_returns)\nN2 = len(xli_returns)\n\nm1 = xle_returns.mean()\nm2 = xli_returns.mean()\n\ns1 = xle_returns.std()\ns2 = xli_returns.std()\n\ntest_statistic = (m1 - m2) / (s1**2 / N1 + s2**2 / N2)**0.5\nprint 't-test statistic:', test_statistic\n\n# Alternative form, using the scipy library on python. \n\nstats.ttest_ind(xle_returns, xli_returns, equal_var=False)",
"Exercise 3 : Skewness\n\nCalculate the mean and median of the two assets\nCalculate the skewness using the scipy library",
"# Calculate the mean and median of xle and xli using the numpy library\n\nxle_mean = np.mean(xle_returns)\nxle_median = np.median(xle_returns)\nprint 'Mean of XLE returns = ', xle_mean, '; median = ', xle_median\n\nxli_mean = np.mean(xli_returns)\nxli_median = np.median(xli_returns)\nprint 'Mean of XLI returns = ', xli_mean, '; median = ', xli_median\n\n# Print values of Skewness for xle and xli returns \n\nprint 'Skew of XLE returns:', stats.skew(xle_returns)\nprint 'Skew of XLI returns:', stats.skew(xli_returns)",
"And the skewness of XLE returns of values > 0 means that there is more weight in the right tail of the distribution. The skewness of XLI returns of value > 0 means that there is more weight in the left tail of the distribution.\n\nExercise 4 : Kurtosis\n\nCheck the kurtosis of the two assets, using the scipy library. \nUsing the seaborn library, plot the distribution of XLE and XLI returns. \n\nRecall: \n- Kurtosis > 3 is leptokurtic, a highly peaked, narrow deviation from the mean\n- Kurtosis = 3 is mesokurtic. The most significant mesokurtic distribution is the normal distribution family. \n- Kurtosis < 3 is platykurtic, a lower-peaked, broad deviation from the mean",
"# Print value of Kurtosis for xle and xli returns \n\nprint 'kurtosis:', stats.kurtosis(xle_returns)\nprint 'kurtosis:', stats.kurtosis(xli_returns)\n\n# Distribution plot of XLE returns in red (for Kurtosis of 1.6). \n# Distribution plot of XLI returns in blue (for Kurtosis of 2.0).\n\nxle = sns.distplot(xle_returns, color = 'r', axlabel = 'xle')\nxli = sns.distplot(xli_returns, axlabel = 'xli');",
"We can clearly see from the two graphs that as our kurtosis gets lower, the distribution gets more flat.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Danghor/Algorithms | Python/Chapter-09/Dijkstra.ipynb | gpl-2.0 | [
"from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)",
"Dijkstra's Shortest Path Algorithm\nThe notebook Set.ipynb implements <em style=\"color:blue\">sets</em> as\n<a href=\"https://en.wikipedia.org/wiki/AVL_tree\">AVL trees</a>.\nThe API provided by Set offers the following API:\n- Set() creates an empty set.\n- S.isEmpty() checks whether the set Sis empty.\n- S.member(x) checks whether x is an element of the given set S.\n- S.insert(x) inserts x into the set S.\n This does not return a new set but rather modifies the given set S.\n- S.delete(x) deletes x from the set S.\n This does not return a new set but rather modifies the set S.\n- S.pop() returns the <em style=\"color:blue\">smallest element</em> of the set S.\n Furthermore, this element is removed from the given set S.\nSince sets are implemented as ordered binary trees, the elements of a set need to be comparable, i.e. if \nx and y are inserted into a set, then the expression x < y has to be defined and has to return a \nBoolean value. Furthermore, the relation < has to be a \n<a href=\"https://en.wikipedia.org/wiki/linear_order\">linear order</a>.\nThe class Set can be used to implement a priority queue that supports the \n<em style=\"color:blue\">removal</em> of elements.",
"%run Set.ipynb ",
"The function call shortest_path takes a node source and a set Edges.\nThe function shortest_path takes two arguments.\n- source is the start node.\n- Edges is a dictionary that encodes the set of edges of the graph. For every node x the value of Edges[x] has the form\n $$ \\bigl[ (y_1, l_1), \\cdots, (y_n, l_n) \\bigr]. $$\n This list is interpreted as follows: For every $i = 1,\\cdots,n$ there is an edge\n $(x, y_i)$ pointing from $x$ to $y_i$ and this edge has the length $l_i$.\nThe function returns the dictionary Distance. For every node u such that there is a path from source to \nu, Distance[u] is the length of the shortest path from source to u. The implementation uses \n<a href=\"https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm\">Dijkstra's algorithm</a> and proceeds as follows:\n\nDistance is a dictionary mapping nodes to their estimated distance from the node\n source. If d = Distance[x], then we know that there is a path of length d leading\n from source to x. However, in general we do not know whether there is a path shorter\n than d that also connects the source to the node x.\nThe function shortest_path maintains an additional variable called Visited.\n This variable contains the set of those nodes that have been <em style=\"color:blue\">visited</em> \n by the algorithm.\n To be more precise, Visited contains those nodes u that have been removed from the\n Fringe and for which all neighboring nodes, i.e. those nodes y such that\n there is an edge (u,y), have been examined. It can be shown that once a node u is added to\n Visited, Distance[u] is the length of the shortest path from source to u.\nFringe is a priority queue that contains pairs of the form (d, x), where x is a node and d\n is the distance that x has from the node source. This priority queue is implemented as a set,\n which in turn is represented by an ordered binary tree. The fact that we store the node x and the\n distance d as a pair (d,x) implies that the distances are used as priorities because pairs are\n compared lexicographically.\n Initially the only node that is known to be\n reachable from source is the node source. Hence Fringe is initialized as the\n set { (0, source) }.\nAs long as the set Fringe is not empty, line 7 of the implementation removes that node u\n from the set Fringe that has the smallest distance d from the node source.\nNext, all edges leading away from u are visited. If there is an edge (u, v) that has length l,\n then we check whether the node v has already a distance assigned. If the node v already has the\n distance dv assigned but the value d + l is less than dv, then we have found a\n shorter path from source to v. This path leads from source to u and then proceeds\n to v via the edge (u,v).\nIf v had already been visited before and hence dv=Distance[v] is defined, we\n have to update the priority of the v in the Fringe. The easiest way to do this is to remove\n the old pair (dv, v) from the Fringe and replace this pair by the new pair\n (d+l, v), because d+l is the new estimate of the distance between source and v and\n d+l is the new priority of v.\nOnce we have inspected all neighbours of the node u, u is added to the set of those nodes that have\n been Visited.\nWhen the Fringe has been exhausted, the dictionary Distance contains the distances of\n every node that is reachable from the node source",
"def shortest_path(source, Edges):\n Distance = { source: 0 }\n Visited = { source }\n Fringe = Set()\n Fringe.insert( (0, source) )\n while not Fringe.isEmpty():\n d, u = Fringe.pop() # get and remove smallest element\n for v, l in Edges[u]:\n dv = Distance.get(v, None)\n if dv == None or d + l < dv:\n if dv != None:\n Fringe.delete( (dv, v) )\n Distance[v] = d + l\n Fringe.insert( (d + l, v) )\n Visited.add(u)\n return Distance",
"The version of shortest_path given below provides a graphical animation of the algorithm.",
"def shortest_path(source, Edges):\n Distance = { source: 0 }\n Visited = { source } # set only needed for visualization\n Fringe = Set()\n Fringe.insert( (0, source) )\n while not Fringe.isEmpty():\n d, u = Fringe.pop()\n display(toDot(source, u, Edges, Fringe, Distance, Visited))\n print('_' * 80)\n for v, l in Edges[u]:\n dv = Distance.get(v, None)\n if dv == None or d + l < dv:\n if dv != None:\n Fringe.delete( (dv, v) )\n Distance[v] = d + l\n Fringe.insert( (d + l, v) )\n Visited.add(u)\n display(toDot(source, None, Edges, Fringe, Distance, Visited))\n return Distance",
"Code to Display the Directed Graph",
"import graphviz as gv",
"The function $\\texttt{toDot}(\\texttt{source}, \\texttt{Edges}, \\texttt{Fringe}, \\texttt{Distance}, \\texttt{Visited})$ takes a graph that is represented by \nits Edges, a set of nodes Fringe, and a dictionary Distance that has the distance of a node from the node source, and set Visited of nodes that have already been visited.",
"def toDot(source, p, Edges, Fringe, Distance, Visited):\n V = set()\n for x in Edges.keys():\n V.add(x)\n dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})\n dot.attr(rankdir='LR', size='8,5')\n for x in V:\n if x == source:\n dot.node(str(x), color='blue', shape='doublecircle')\n else:\n d = str(Distance.get(x, ''))\n if x == p:\n dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='magenta')\n elif x in Distance and Fringe.member( (Distance[x], x) ):\n dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='red')\n elif x in Visited:\n dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='blue')\n else:\n dot.node(str(x), label='{' + str(x) + '|' + d + '}')\n for u in V:\n for v, l in Edges[u]:\n dot.edge(str(u), str(v), label=str(l))\n return dot",
"Code for Testing",
"Edges = { 'a': [ ('c', 2), ('b', 9)], \n 'b': [('d', 1)],\n 'c': [('e', 5), ('g', 3)], \n 'd': [('f', 2), ('e', 4)], \n 'e': [('f', 1), ('b', 2)],\n 'f': [('h', 5)],\n 'g': [('e', 1)],\n 'h': []\n }\n\ns = 'a'\nsp = shortest_path(s, Edges)\nsp",
"Crossing the Tunnel\nFour persons, Alice, Britney, Charly and Daniel have to cross a tunnel.\nThe tunnel is so narrow, that at most two persons can cross it together.\nIn order to cross the tunnel, a torch is needed. Together, they only\nhave a single torch.\n 1. Alice is the fastest and can cross the tunnel in 1 minute.\n 2. Britney needs 2 minutes to cross the tunnel.\n 3. Charly is slower and needs 4 minutes.\n 4. Daniel is slowest and takes 5 minutes to cross the tunnel.\nWhat is the fastest plan to cross the tunnel?\nWe will model this problem as a graph theoretical problem. The nodes of the graph will be sets \nof people. In particular, it will be the set of people at the entrance of the tunnel. In order to model the torch, the torch can also be a member of these sets.",
"All = frozenset({ 'Alice', 'Britney', 'Charly', 'Daniel', 'Torch' })",
"The timining is modelled by a dictionary.",
"Time = { 'Alice': 1, 'Britney': 2, 'Charly': 4, 'Daniel': 5, 'Torch': 0 }",
"The function $\\texttt{power}(M)$ defined below computes the power list of the set $M$, i.e. we have:\n$$ \\texttt{power}(M) = 2^M = \\bigl{A \\mid A \\subseteq M \\bigr} $$",
"def power(M):\n if M == set():\n return { frozenset() }\n else:\n C = set(M) # C is a copy of M as we don't want to change the set M\n x = C.pop() # pop removes the element x from the set C\n P1 = power(C)\n P2 = { A | {x} for A in P1 }\n return P1 | P2",
"If $B$ is a set of persons, then $\\texttt{duration}(B)$ is the time that this group needs to cross the tunnel.\n$B$ also contains 'Torch'.",
"def duration(B):\n return max(Time[x] for x in B)",
"$\\texttt{left_right}(S)$ describes a crossing of the tunnel from the entrance at the left side left to the exit at the right side of the tunnel.",
"def left_right(S):\n return [(S - B, duration(B)) for B in power(S) if 'Torch' in B and 2 <= len(B) <= 3]",
"$\\texttt{right_left}(S)$ describes a crossing of the tunnel from right to left.",
"def right_left(S):\n return [(S | B, duration(B)) for B in power(All - S) if 'Torch' in B and 2 <= len(B) <= 3]\n\nEdges = { S: left_right(S) + right_left(S) for S in power(All) }\nlen(Edges)",
"The function shortest_path is Dijkstra's algorithm. It returns both a dictionary Parent containing \nthe parent nodes and a dictionary Distance with the distances. The dictionary Parent can be used to\ncompute the shortest path leading from the node source to some other node.",
"def shortest_path(source, Edges):\n Distance = { source: 0 }\n Parent = {}\n Fringe = Set()\n Fringe.insert( (0, source) )\n while not Fringe.isEmpty():\n d, u = Fringe.pop()\n for v, l in Edges[u]:\n dv = Distance.get(v, None)\n if dv == None or d + l < dv:\n if dv != None:\n Fringe.delete( (dv, v) )\n Distance[v] = d + l\n Fringe.insert( (d + l, v) )\n Parent[v] = u\n return Parent, Distance\n\nParent, Distance = shortest_path(frozenset(All), Edges)",
"Let us see whether the goal was reachable and how long it takes to reach the goal.",
"goal = frozenset()\nDistance[goal]",
"Given to nodes source and goal and a dictionary containing the parent of every node, the function\nfind_path returns the path from source to goal.",
"def find_path(source, goal, Parent):\n p = Parent.get(goal)\n if p == None:\n return [source]\n return find_path(source, p, Parent) + [goal]\n\nPath = find_path(frozenset(All), frozenset(), Parent)\n\ndef print_path():\n total = 0\n print(\"_\" * 81);\n for i in range(len(Path)):\n Left = set(Path[i])\n Right = set(All) - set(Left)\n if Left == set() or Right == set():\n print(Left, \" \" * 25, Right)\n else:\n print(Left, \" \" * 30, Right)\n print(\"_\" * 81);\n if i < len(Path) - 1:\n if \"Torch\" in Path[i]:\n Diff = set(Path[i]) - set(Path[i+1])\n time = duration(Diff)\n total += time\n print(\" \" * 20, \">>> \", Diff, ':', time, \" >>>\")\n else:\n Diff = set(Path[i+1]) - set(Path[i])\n time = duration(Diff)\n total += time\n print(\" \" * 20, \"<<< \", Diff, ':', time, \" <<<\")\n print(\"_\" * 81)\n print('Total time:', total, 'minutes.')\n\nprint_path()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
drabastomek/learningPySpark | Chapter06/LearningPySpark_Chapter06.ipynb | gpl-3.0 | [
"Introducing ML package of PySpark\nPredict chances of infant survival with ML\nLoad the data\nFirst, we load the data.",
"import pyspark.sql.types as typ\n\nlabels = [\n ('INFANT_ALIVE_AT_REPORT', typ.IntegerType()),\n ('BIRTH_PLACE', typ.StringType()),\n ('MOTHER_AGE_YEARS', typ.IntegerType()),\n ('FATHER_COMBINED_AGE', typ.IntegerType()),\n ('CIG_BEFORE', typ.IntegerType()),\n ('CIG_1_TRI', typ.IntegerType()),\n ('CIG_2_TRI', typ.IntegerType()),\n ('CIG_3_TRI', typ.IntegerType()),\n ('MOTHER_HEIGHT_IN', typ.IntegerType()),\n ('MOTHER_PRE_WEIGHT', typ.IntegerType()),\n ('MOTHER_DELIVERY_WEIGHT', typ.IntegerType()),\n ('MOTHER_WEIGHT_GAIN', typ.IntegerType()),\n ('DIABETES_PRE', typ.IntegerType()),\n ('DIABETES_GEST', typ.IntegerType()),\n ('HYP_TENS_PRE', typ.IntegerType()),\n ('HYP_TENS_GEST', typ.IntegerType()),\n ('PREV_BIRTH_PRETERM', typ.IntegerType())\n]\n\nschema = typ.StructType([\n typ.StructField(e[0], e[1], False) for e in labels\n])\n\nbirths = spark.read.csv('births_transformed.csv.gz', \n header=True, \n schema=schema)",
"Create transformers",
"import pyspark.ml.feature as ft\n\nbirths = births \\\n .withColumn( 'BIRTH_PLACE_INT', \n births['BIRTH_PLACE'] \\\n .cast(typ.IntegerType()))",
"Having done this, we can now create our first Transformer.",
"encoder = ft.OneHotEncoder(\n inputCol='BIRTH_PLACE_INT', \n outputCol='BIRTH_PLACE_VEC')",
"Let's now create a single column with all the features collated together.",
"featuresCreator = ft.VectorAssembler(\n inputCols=[\n col[0] \n for col \n in labels[2:]] + \\\n [encoder.getOutputCol()], \n outputCol='features'\n)",
"Create an estimator\nIn this example we will (once again) us the Logistic Regression model.",
"import pyspark.ml.classification as cl",
"Once loaded, let's create the model.",
"logistic = cl.LogisticRegression(\n maxIter=10, \n regParam=0.01, \n labelCol='INFANT_ALIVE_AT_REPORT')",
"Create a pipeline\nAll that is left now is to creat a Pipeline and fit the model. First, let's load the Pipeline from the package.",
"from pyspark.ml import Pipeline\n\npipeline = Pipeline(stages=[\n encoder, \n featuresCreator, \n logistic\n ])",
"Fit the model\nConventiently, DataFrame API has the .randomSplit(...) method.",
"births_train, births_test = births \\\n .randomSplit([0.7, 0.3], seed=666)",
"Now run our pipeline and estimate our model.",
"model = pipeline.fit(births_train)\ntest_model = model.transform(births_test)",
"Here's what the test_model looks like.",
"test_model.take(1)",
"Model performance\nObviously, we would like to now test how well our model did.",
"import pyspark.ml.evaluation as ev\n\nevaluator = ev.BinaryClassificationEvaluator(\n rawPredictionCol='probability', \n labelCol='INFANT_ALIVE_AT_REPORT')\n\nprint(evaluator.evaluate(test_model, \n {evaluator.metricName: 'areaUnderROC'}))\nprint(evaluator.evaluate(test_model, {evaluator.metricName: 'areaUnderPR'}))",
"Saving the model\nPySpark allows you to save the Pipeline definition for later use.",
"pipelinePath = './infant_oneHotEncoder_Logistic_Pipeline'\npipeline.write().overwrite().save(pipelinePath)",
"So, you can load it up later and use straight away to .fit(...) and predict.",
"loadedPipeline = Pipeline.load(pipelinePath)\nloadedPipeline \\\n .fit(births_train)\\\n .transform(births_test)\\\n .take(1)",
"You can also save the whole model",
"from pyspark.ml import PipelineModel\n\nmodelPath = './infant_oneHotEncoder_Logistic_PipelineModel'\nmodel.write().overwrite().save(modelPath)\n\nloadedPipelineModel = PipelineModel.load(modelPath)\ntest_loadedModel = loadedPipelineModel.transform(births_test)",
"Parameter hyper-tuning\nGrid search\nLoad the .tuning part of the package.",
"import pyspark.ml.tuning as tune",
"Next let's specify our model and the list of parameters we want to loop through.",
"logistic = cl.LogisticRegression(\n labelCol='INFANT_ALIVE_AT_REPORT')\n\ngrid = tune.ParamGridBuilder() \\\n .addGrid(logistic.maxIter, \n [2, 10, 50]) \\\n .addGrid(logistic.regParam, \n [0.01, 0.05, 0.3]) \\\n .build()",
"Next, we need some way of comparing the models.",
"evaluator = ev.BinaryClassificationEvaluator(\n rawPredictionCol='probability', \n labelCol='INFANT_ALIVE_AT_REPORT')",
"Create the logic that will do the validation work for us.",
"cv = tune.CrossValidator(\n estimator=logistic, \n estimatorParamMaps=grid, \n evaluator=evaluator\n)",
"Create a purely transforming Pipeline.",
"pipeline = Pipeline(stages=[encoder,featuresCreator])\ndata_transformer = pipeline.fit(births_train)",
"Having done this, we are ready to find the optimal combination of parameters for our model.",
"cvModel = cv.fit(data_transformer.transform(births_train))",
"The cvModel will return the best model estimated. We can now use it to see if it performed better than our previous model.",
"data_train = data_transformer \\\n .transform(births_test)\nresults = cvModel.transform(data_train)\n\nprint(evaluator.evaluate(results, \n {evaluator.metricName: 'areaUnderROC'}))\nprint(evaluator.evaluate(results, \n {evaluator.metricName: 'areaUnderPR'}))",
"What parameters has the best model? The answer is a little bit convoluted but here's how you can extract it.",
"results = [\n (\n [\n {key.name: paramValue} \n for key, paramValue \n in zip(\n params.keys(), \n params.values())\n ], metric\n ) \n for params, metric \n in zip(\n cvModel.getEstimatorParamMaps(), \n cvModel.avgMetrics\n )\n]\n\nsorted(results, \n key=lambda el: el[1], \n reverse=True)[0]",
"Train-Validation splitting\nUse the ChiSqSelector to select only top 5 features, thus limiting the complexity of our model.",
"selector = ft.ChiSqSelector(\n numTopFeatures=5, \n featuresCol=featuresCreator.getOutputCol(), \n outputCol='selectedFeatures',\n labelCol='INFANT_ALIVE_AT_REPORT'\n)\n\nlogistic = cl.LogisticRegression(\n labelCol='INFANT_ALIVE_AT_REPORT',\n featuresCol='selectedFeatures'\n)\n\npipeline = Pipeline(stages=[encoder,featuresCreator,selector])\ndata_transformer = pipeline.fit(births_train)",
"The TrainValidationSplit object gets created in the same fashion as the CrossValidator model.",
"tvs = tune.TrainValidationSplit(\n estimator=logistic, \n estimatorParamMaps=grid, \n evaluator=evaluator\n)",
"As before, we fit our data to the model, and calculate the results.",
"tvsModel = tvs.fit(\n data_transformer \\\n .transform(births_train)\n)\n\ndata_train = data_transformer \\\n .transform(births_test)\nresults = tvsModel.transform(data_train)\n\nprint(evaluator.evaluate(results, \n {evaluator.metricName: 'areaUnderROC'}))\nprint(evaluator.evaluate(results, \n {evaluator.metricName: 'areaUnderPR'}))",
"Other features of PySpark ML in action\nFeature extraction\nNLP related feature extractors\nSimple dataset.",
"text_data = spark.createDataFrame([\n ['''Machine learning can be applied to a wide variety \n of data types, such as vectors, text, images, and \n structured data. This API adopts the DataFrame from \n Spark SQL in order to support a variety of data types.'''],\n ['''DataFrame supports many basic and structured types; \n see the Spark SQL datatype reference for a list of \n supported types. In addition to the types listed in \n the Spark SQL guide, DataFrame can use ML Vector types.'''],\n ['''A DataFrame can be created either implicitly or \n explicitly from a regular RDD. See the code examples \n below and the Spark SQL programming guide for examples.'''],\n ['''Columns in a DataFrame are named. The code examples \n below use names such as \"text,\" \"features,\" and \"label.\"''']\n], ['input'])",
"First, we need to tokenize this text.",
"tokenizer = ft.RegexTokenizer(\n inputCol='input', \n outputCol='input_arr', \n pattern='\\s+|[,.\\\"]')",
"The output of the tokenizer looks similar to this.",
"tok = tokenizer \\\n .transform(text_data) \\\n .select('input_arr') \n\ntok.take(1)",
"Use the StopWordsRemover(...).",
"stopwords = ft.StopWordsRemover(\n inputCol=tokenizer.getOutputCol(), \n outputCol='input_stop')",
"The output of the method looks as follows",
"stopwords.transform(tok).select('input_stop').take(1)",
"Build NGram model and the Pipeline.",
"ngram = ft.NGram(n=2, \n inputCol=stopwords.getOutputCol(), \n outputCol=\"nGrams\")\n\npipeline = Pipeline(stages=[tokenizer, stopwords, ngram])",
"Now that we have the pipeline we follow in the very similar fashion as before.",
"data_ngram = pipeline \\\n .fit(text_data) \\\n .transform(text_data)\n \ndata_ngram.select('nGrams').take(1)",
"That's it. We got our n-grams and we can then use them in further NLP processing.\nDiscretize continuous variables\nIt is sometimes useful to band the values into discrete buckets.",
"import numpy as np\n\nx = np.arange(0, 100)\nx = x / 100.0 * np.pi * 4\ny = x * np.sin(x / 1.764) + 20.1234\n\nschema = typ.StructType([\n typ.StructField('continuous_var', \n typ.DoubleType(), \n False\n )\n])\n\ndata = spark.createDataFrame([[float(e), ] for e in y], schema=schema)",
"Use the QuantileDiscretizer model to split our continuous variable into 5 buckets (see the numBuckets parameter).",
"discretizer = ft.QuantileDiscretizer(\n numBuckets=5, \n inputCol='continuous_var', \n outputCol='discretized')",
"Let's see what we got.",
"data_discretized = discretizer.fit(data).transform(data)\n\ndata_discretized \\\n .groupby('discretized')\\\n .mean('continuous_var')\\\n .sort('discretized')\\\n .collect()",
"Standardizing continuous variables\nCreate a vector representation of our continuous variable (as it is only a single float)",
"vectorizer = ft.VectorAssembler(\n inputCols=['continuous_var'], \n outputCol= 'continuous_vec')",
"Build a normalizer and a pipeline.",
"normalizer = ft.StandardScaler(\n inputCol=vectorizer.getOutputCol(), \n outputCol='normalized', \n withMean=True,\n withStd=True\n)\n\npipeline = Pipeline(stages=[vectorizer, normalizer])\ndata_standardized = pipeline.fit(data).transform(data)",
"Classification\nWe will now use the RandomForestClassfier to model the chances of survival for an infant.\nFirst, we need to cast the label feature to DoubleType.",
"import pyspark.sql.functions as func\n\nbirths = births.withColumn(\n 'INFANT_ALIVE_AT_REPORT', \n func.col('INFANT_ALIVE_AT_REPORT').cast(typ.DoubleType())\n)\n\nbirths_train, births_test = births \\\n .randomSplit([0.7, 0.3], seed=666)",
"We are ready to build our model.",
"classifier = cl.RandomForestClassifier(\n numTrees=5, \n maxDepth=5, \n labelCol='INFANT_ALIVE_AT_REPORT')\n\npipeline = Pipeline(\n stages=[\n encoder,\n featuresCreator, \n classifier])\n\nmodel = pipeline.fit(births_train)\ntest = model.transform(births_test)",
"Let's now see how the RandomForestClassifier model performs compared to the LogisticRegression.",
"evaluator = ev.BinaryClassificationEvaluator(\n labelCol='INFANT_ALIVE_AT_REPORT')\nprint(evaluator.evaluate(test, \n {evaluator.metricName: \"areaUnderROC\"}))\nprint(evaluator.evaluate(test, \n {evaluator.metricName: \"areaUnderPR\"}))",
"Let's test how well would one tree do, then.",
"classifier = cl.DecisionTreeClassifier(\n maxDepth=5, \n labelCol='INFANT_ALIVE_AT_REPORT')\npipeline = Pipeline(stages=[\n encoder,\n featuresCreator, \n classifier]\n)\n\nmodel = pipeline.fit(births_train)\ntest = model.transform(births_test)\n\nevaluator = ev.BinaryClassificationEvaluator(\n labelCol='INFANT_ALIVE_AT_REPORT')\nprint(evaluator.evaluate(test, \n {evaluator.metricName: \"areaUnderROC\"}))\nprint(evaluator.evaluate(test, \n {evaluator.metricName: \"areaUnderPR\"}))",
"Clustering\nIn this example we will use k-means model to find similarities in the births data.",
"import pyspark.ml.clustering as clus\n\nkmeans = clus.KMeans(k = 5, \n featuresCol='features')\n\npipeline = Pipeline(stages=[\n encoder,\n featuresCreator, \n kmeans]\n)\n\nmodel = pipeline.fit(births_train)",
"Having estimated the model, let's see if we can find some differences between clusters.",
"test = model.transform(births_test)\n\ntest \\\n .groupBy('prediction') \\\n .agg({\n '*': 'count', \n 'MOTHER_HEIGHT_IN': 'avg'\n }).collect()",
"In the field of NLP, problems such as topic extract rely on clustering to detect documents with similar topics. First, let's create our dataset.",
"text_data = spark.createDataFrame([\n ['''To make a computer do anything, you have to write a \n computer program. To write a computer program, you have \n to tell the computer, step by step, exactly what you want \n it to do. The computer then \"executes\" the program, \n following each step mechanically, to accomplish the end \n goal. When you are telling the computer what to do, you \n also get to choose how it's going to do it. That's where \n computer algorithms come in. The algorithm is the basic \n technique used to get the job done. Let's follow an \n example to help get an understanding of the algorithm \n concept.'''],\n ['''Laptop computers use batteries to run while not \n connected to mains. When we overcharge or overheat \n lithium ion batteries, the materials inside start to \n break down and produce bubbles of oxygen, carbon dioxide, \n and other gases. Pressure builds up, and the hot battery \n swells from a rectangle into a pillow shape. Sometimes \n the phone involved will operate afterwards. Other times \n it will die. And occasionally—kapow! To see what's \n happening inside the battery when it swells, the CLS team \n used an x-ray technology called computed tomography.'''],\n ['''This technology describes a technique where touch \n sensors can be placed around any side of a device \n allowing for new input sources. The patent also notes \n that physical buttons (such as the volume controls) could \n be replaced by these embedded touch sensors. In essence \n Apple could drop the current buttons and move towards \n touch-enabled areas on the device for the existing UI. It \n could also open up areas for new UI paradigms, such as \n using the back of the smartphone for quick scrolling or \n page turning.'''],\n ['''The National Park Service is a proud protector of \n America’s lands. Preserving our land not only safeguards \n the natural environment, but it also protects the \n stories, cultures, and histories of our ancestors. As we \n face the increasingly dire consequences of climate \n change, it is imperative that we continue to expand \n America’s protected lands under the oversight of the \n National Park Service. Doing so combats climate change \n and allows all American’s to visit, explore, and learn \n from these treasured places for generations to come. It \n is critical that President Obama acts swiftly to preserve \n land that is at risk of external threats before the end \n of his term as it has become blatantly clear that the \n next administration will not hold the same value for our \n environment over the next four years.'''],\n ['''The National Park Foundation, the official charitable \n partner of the National Park Service, enriches America’s \n national parks and programs through the support of \n private citizens, park lovers, stewards of nature, \n history enthusiasts, and wilderness adventurers. \n Chartered by Congress in 1967, the Foundation grew out of \n a legacy of park protection that began over a century \n ago, when ordinary citizens took action to establish and \n protect our national parks. Today, the National Park \n Foundation carries on the tradition of early park \n advocates, big thinkers, doers and dreamers—from John \n Muir and Ansel Adams to President Theodore Roosevelt.'''],\n ['''Australia has over 500 national parks. Over 28 \n million hectares of land is designated as national \n parkland, accounting for almost four per cent of \n Australia's land areas. 
In addition, a further six per \n cent of Australia is protected and includes state \n forests, nature parks and conservation reserves.National \n parks are usually large areas of land that are protected \n because they have unspoilt landscapes and a diverse \n number of native plants and animals. This means that \n commercial activities such as farming are prohibited and \n human activity is strictly monitored.''']\n], ['documents'])",
"First, we will once again use the RegexTokenizer and the StopWordsRemover models.",
"tokenizer = ft.RegexTokenizer(\n inputCol='documents', \n outputCol='input_arr', \n pattern='\\s+|[,.\\\"]')\n\nstopwords = ft.StopWordsRemover(\n inputCol=tokenizer.getOutputCol(), \n outputCol='input_stop')",
"Next in our pipeline is the CountVectorizer.",
"stringIndexer = ft.CountVectorizer(\n inputCol=stopwords.getOutputCol(), \n outputCol=\"input_indexed\")\n\ntokenized = stopwords \\\n .transform(\n tokenizer\\\n .transform(text_data)\n )\n \nstringIndexer \\\n .fit(tokenized)\\\n .transform(tokenized)\\\n .select('input_indexed')\\\n .take(2)",
"We will use the LDA model - the Latent Dirichlet Allocation model - to extract the topics.",
"clustering = clus.LDA(k=2, optimizer='online', featuresCol=stringIndexer.getOutputCol())",
"Put these puzzles together.",
"pipeline = Pipeline(stages=[\n tokenizer, \n stopwords,\n stringIndexer, \n clustering]\n)",
"Let's see if we have properly uncovered the topics.",
"topics = pipeline \\\n .fit(text_data) \\\n .transform(text_data)\n\ntopics.select('topicDistribution').collect()",
"Regression\nIn this section we will try to predict the MOTHER_WEIGHT_GAIN.",
"features = ['MOTHER_AGE_YEARS','MOTHER_HEIGHT_IN',\n 'MOTHER_PRE_WEIGHT','DIABETES_PRE',\n 'DIABETES_GEST','HYP_TENS_PRE', \n 'HYP_TENS_GEST', 'PREV_BIRTH_PRETERM',\n 'CIG_BEFORE','CIG_1_TRI', 'CIG_2_TRI', \n 'CIG_3_TRI'\n ]",
"First, we will collate all the features together and use the ChiSqSelector to select only the top 6 most important features.",
"featuresCreator = ft.VectorAssembler(\n inputCols=[col for col in features[1:]], \n outputCol='features'\n)\n\nselector = ft.ChiSqSelector(\n numTopFeatures=6, \n outputCol=\"selectedFeatures\", \n labelCol='MOTHER_WEIGHT_GAIN'\n)",
"In order to predict the weight gain we will use the gradient boosted trees regressor.",
"import pyspark.ml.regression as reg\n\nregressor = reg.GBTRegressor(\n maxIter=15, \n maxDepth=3,\n labelCol='MOTHER_WEIGHT_GAIN')",
"Finally, again, we put it all together into a Pipeline.",
"pipeline = Pipeline(stages=[\n featuresCreator, \n selector,\n regressor])\n\nweightGain = pipeline.fit(births_train)",
"Having created the weightGain model, let's see if it performs well on our testing data.",
"evaluator = ev.RegressionEvaluator(\n predictionCol=\"prediction\", \n labelCol='MOTHER_WEIGHT_GAIN')\n\nprint(evaluator.evaluate(\n weightGain.transform(births_test), \n {evaluator.metricName: 'r2'}))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chengsoonong/mclass-sky | mclearn/knfst/python/test.ipynb | bsd-3-clause | [
"import numpy as np\nimport scipy as sp\nimport pandas as pd\nimport urllib.request\nimport os\nimport shutil\nimport tarfile\nimport matplotlib.pyplot as plt\nfrom sklearn import datasets, cross_validation, metrics\nfrom sklearn.preprocessing import KernelCenterer\n\n%matplotlib notebook",
"First we need to download the Caltech256 dataset.",
"DATASET_URL = r\"http://homes.esat.kuleuven.be/~tuytelaa/\"\\\n\"unsup/unsup_caltech256_dense_sift_1000_bow.tar.gz\"\nDATASET_DIR = \"../../../projects/weiyen/data\"\n\nfilename = os.path.split(DATASET_URL)[1]\ndest_path = os.path.join(DATASET_DIR, filename)\n\nif os.path.exists(dest_path):\n print(\"{} exists. Skipping download...\".format(dest_path))\nelse:\n with urllib.request.urlopen(DATASET_URL) as response, open(dest_path, 'wb') as out_file:\n shutil.copyfileobj(response, out_file)\n print(\"Dataset downloaded. Extracting files...\")\n\ntar = tarfile.open(dest_path)\ntar.extractall(path=DATASET_DIR)\nprint(\"Files extracted.\")\ntar.close()\n\npath = os.path.join(DATASET_DIR, \"bow_1000_dense/\")",
"Calculate multi-class KNFST model for multi-class novelty detection\nINPUT\n K: NxN kernel matrix containing similarities of n training samples\n labels: Nx1 column vector containing multi-class labels of N training samples\n\nOUTPUT\n proj: Projection of KNFST\n target_points: The projections of training data into the null space\n\nLoad the dataset into memory",
"ds = datasets.load_files(path)\nds.data = np.vstack([np.fromstring(txt, sep='\\t') for txt in ds.data])\n\n\ndata = ds.data\ntarget = ds.target",
"Select a few \"known\" classes",
"classes = np.unique(target)\nnum_class = len(classes)\nnum_known = 5\n\nknown = np.random.choice(classes, num_known)\nmask = np.array([y in known for y in target])\n\nX_train = data[mask]\ny_train = target[mask]\n\nidx = y_train.argsort()\nX_train = X_train[idx]\ny_train = y_train[idx]\n\nprint(X_train.shape)\nprint(y_train.shape)\n\ndef _hik(x, y):\n '''\n Implements the histogram intersection kernel.\n '''\n return np.minimum(x, y).sum()\n\n\nfrom scipy.linalg import svd\n\ndef nullspace(A, eps=1e-12):\n u, s, vh = svd(A)\n null_mask = (s <= eps)\n null_space = sp.compress(null_mask, vh, axis=0)\n return sp.transpose(null_space)\n\nA = np.array([[2,3,5],[-4,2,3],[0,0,0]])\nnp.array([-4,2,3]).dot(nullspace(A))",
"Train the model, and obtain the projection and class target points.",
"def learn(K, labels):\n classes = np.unique(labels)\n if len(classes) < 2:\n raise Exception(\"KNFST requires 2 or more classes\")\n n, m = K.shape\n if n != m:\n raise Exception(\"Kernel matrix must be quadratic\")\n \n centered_k = KernelCenterer().fit_transform(K)\n \n basis_values, basis_vecs = np.linalg.eigh(centered_k)\n \n basis_vecs = basis_vecs[:,basis_values > 1e-12]\n basis_values = basis_values[basis_values > 1e-12]\n \n basis_values = np.diag(1.0/np.sqrt(basis_values))\n\n basis_vecs = basis_vecs.dot(basis_values)\n\n L = np.zeros([n,n])\n for cl in classes:\n for idx1, x in enumerate(labels == cl):\n for idx2, y in enumerate(labels == cl):\n if x and y:\n L[idx1, idx2] = 1.0/np.sum(labels==cl)\n M = np.ones([m,m])/m\n H = (((np.eye(m,m)-M).dot(basis_vecs)).T).dot(K).dot(np.eye(n,m)-L)\n \n t_sw = H.dot(H.T)\n eigenvecs = nullspace(t_sw)\n if eigenvecs.shape[1] < 1:\n eigenvals, eigenvecs = np.linalg.eigh(t_sw)\n \n eigenvals = np.diag(eigenvals)\n min_idx = eigenvals.argsort()[0]\n eigenvecs = eigenvecs[:, min_idx]\n proj = ((np.eye(m,m)-M).dot(basis_vecs)).dot(eigenvecs)\n target_points = []\n for cl in classes:\n k_cl = K[labels==cl, :] \n pt = np.mean(k_cl.dot(proj), axis=0)\n target_points.append(pt)\n \n return proj, np.array(target_points)\n\nkernel_mat = metrics.pairwise_kernels(X_train, metric=_hik)\nproj, target_points = learn(kernel_mat, y_train)\n\ndef squared_euclidean_distances(x, y):\n n = np.shape(x)[0]\n m = np.shape(y)[0]\n distmat = np.zeros((n,m))\n \n for i in range(n):\n for j in range(m):\n buff = x[i,:] - y[j,:]\n distmat[i,j] = buff.dot(buff.T)\n return distmat\n\ndef assign_score(proj, target_points, ks):\n projection_vectors = ks.T.dot(proj)\n sq_dist = squared_euclidean_distances(projection_vectors, target_points)\n scores = np.sqrt(np.amin(sq_dist, 1))\n return scores\n\n\n\nauc_scores = []\nclasses = np.unique(target)\nnum_known = 5\nfor n in range(20):\n num_class = len(classes)\n known = np.random.choice(classes, num_known)\n mask = np.array([y in known for y in target])\n\n X_train = data[mask]\n y_train = target[mask]\n \n idx = y_train.argsort()\n X_train = X_train[idx]\n y_train = y_train[idx]\n \n sample_idx = np.random.randint(0, len(data), size=1000)\n X_test = data[sample_idx,:]\n y_labels = target[sample_idx]\n\n # Test labels are 1 if novel, otherwise 0.\n y_test = np.array([1 if cl not in known else 0 for cl in y_labels])\n \n # Train model\n kernel_mat = metrics.pairwise_kernels(X_train, metric=_hik)\n proj, target_points = learn(kernel_mat, y_train)\n \n # Test\n ks = metrics.pairwise_kernels(X_train, X_test, metric=_hik)\n scores = assign_score(proj, target_points, ks)\n auc = metrics.roc_auc_score(y_test, scores)\n print(\"AUC:\", auc)\n auc_scores.append(auc)\n\n \n\n\nfpr, tpr, thresholds = metrics.roc_curve(y_test, scores)\n\nplt.figure()\nplt.plot(fpr, tpr, label='ROC curve')\nplt.plot([0, 1], [0, 1], 'k--')\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC Curve of the KNFST Novelty Classifier')\nplt.legend(loc=\"lower right\")\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kimkipyo/dss_git_kkp | 통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb | mit | [
"베르누이의 경우 실습 예제",
"X = np.array([[1,0,0],[1,1,1], [0,1,1],[0,1,0],[0,0,1],[1,1,1]])\ny0 = np.zeros(2)\ny1 = np.ones(4)\ny = np.hstack([y0, y1])\nprint(X)\nprint(y)\n\nfrom sklearn.naive_bayes import BernoulliNB\nclf_bern = BernoulliNB().fit(X, y)\n\nclf_bern.classes_\n\nclf_bern.class_count_\n\nfc = clf_bern.feature_count_\nfc\n\nfc / np.repeat(clf_bern.class_count_[:, np.newaxis], 3, axis=1)\n\nx_new = np.array([1,1,0])\n\nclf_bern.predict_proba([x_new])\n\nnp.exp(clf_bern.feature_log_prob_)\n\ntheta = np.exp(clf_bern.feature_log_prob_) #자동적으로 스무딩\ntheta\n\np = ((theta**x_new)*(1-theta)**(1-x_new)).prod(axis=1)*np.exp(clf_bern.class_log_prior_)\np / p.sum()",
"X값 살짝 바뀐 경우(스무딩을 써야 하는 경우)",
"X1 = np.array([[1,0,0],[1,0,1], [0,1,1],[0,1,0],[0,0,1],[1,1,1]])\ny01 = np.zeros(2)\ny11 = np.ones(4)\ny1 = np.hstack([y01, y11])\n\nclf_bern1 = BernoulliNB().fit(X1, y1)\n\nfc1 = clf_bern1.feature_count_\nfc1\n\nnp.repeat(clf_bern1.class_count_[:, np.newaxis], 3, axis=1)\n\nfc1 / np.repeat(clf_bern1.class_count_[:, np.newaxis], 3, axis=1)\n\nclf_bern1.predict_proba([x_new])\n\nnp.exp(clf_bern1.feature_log_prob_)\n\ntheta = np.exp(clf_bern1.feature_log_prob_) \ntheta \n\np = ((theta**x_new)*(1-theta)**(1-x_new)).prod(axis=1)*np.exp(clf_bern1.class_log_prior_)\np / p.sum()",
"다항의 경우 실습 예제",
"X = np.array([[4,4,2],[4,3,3], [6,3,1],[4,6,0],[0,4,1],[1,3,1],[1,1,3],[0,3,2]])\ny0 = np.zeros(4)\ny1 = np.ones(4)\ny = np.hstack([y0, y1])\nprint(X)\nprint(y)\n\nfrom sklearn.naive_bayes import MultinomialNB\nclf_mult = MultinomialNB().fit(X, y)\n\nclf_mult.classes_\n\nclf_mult.class_count_\n\nfc = clf_mult.feature_count_\nfc\n\nnp.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1)\n\nfc / np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1)\n\nclf_mult.alpha\n\n(fc + clf_mult.alpha) / (np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) + clf_mult.alpha * X.shape[1])\n\nnp.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) + clf_mult.alpha * X.shape[1]\n\nx_new1 = np.array([1,1,1])\nclf_mult.predict_proba([x_new1])\n\nx_new2 = np.array([2,2,2])\nclf_mult.predict_proba([x_new2])\n\nx_new3 = np.array([3,3,3])\nclf_mult.predict_proba([x_new3])",
"문제1\n\nfeature와 target이 다음과 같을 때, 베르누이 나이브 베이지안 방법을 사용하여 다음 문제를 푸세요.",
"X = np.array([\n [1, 0, 0],\n [1, 0, 1],\n [0, 0, 1],\n [0, 0, 0],\n [1, 1, 1],\n [0, 1, 1],\n [0, 0, 1],\n [0, 1, 0],\n ])\ny = np.array([0,0,0,0,1,1,1,1])",
"(1) 사전 분포(prior) p(y)를 구하세요.\n\np(y=0) = 0.5\np(y=1) = 0.5",
"py0, py1 = (y==0).sum()/len(y), (y==1).sum()/len(y)\npy0, py1",
"(2) 스무딩 벡터 알파=0 일 때, 다음 x_new에 대해 우도(likelihood)함수 p(x|y)를 구하고 조건부 확률 분포 p(y|x)를 구하세요.(normalize 된 값이 아님!)\n* x_new = [1 1 0]\n<img src=\"1.png.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">",
"x_new = np.array([1, 1, 0])\n\ntheta0 = X[y==0, :].sum(axis=0)/len(X[y==0, :])\ntheta0\n\ntheta1 = X[y==1, :].sum(axis=0)/len(X[y==1, :])\ntheta1\n\nlikelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod()\nlikelihood0\n\nlikelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x_new)).prod()\nlikelihood1\n\npx = likelihood0 * py0 + likelihood1 * py1\npx\n\nlikelihood0 * py0 / px, likelihood1 * py1 / px\n\nfrom sklearn.naive_bayes import BernoulliNB\nmodel = BernoulliNB(alpha=0).fit(X, y)\nmodel.predict_proba([x_new])",
"(3) 스무딩 팩터 알파=0.5일 때, 문제(2)를 다시 풀어보세요.\n<img src=\"22.png.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">",
"theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==0,:])+1)\ntheta0\n\ntheta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==1,:])+1)\ntheta1\n\nx_new = np.array([1, 1, 0])\n\nlikelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod()\nlikelihood0\n\nlikelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x_new)).prod()\nlikelihood1\n\npx = likelihood0 * py0 + likelihood1 * py1\npx\n\nlikelihood0 * py0 / px, likelihood1 * py1 / px\n\nfrom sklearn.naive_bayes import BernoulliNB\nmodel = BernoulliNB(alpha=0.5).fit(X, y)\nmodel.predict_proba([x_new])",
"문제2\n문제 1을 다항 나이브 베이지안(Multinomial Naive Bayesian) 방법을 사용하여 (1), (2), (3)을 다시 풀어보세요\n(1) 사전 분포(prior) p(y)를 구하세요.\n\np(y = 0) = 0.5\np(y = 1) = 0.5\n\n(2) 스무딩 팩터 알파=0 일 때, 다음 x_new에 대해 우도(likelihood)함수 p(x|y)를 구하고 조건부 확률 분포 p(y|x)를 구하세요.(normalize 된 값이 아님!)\n* x_new = [2 3 1]\n<img src=\"3.png.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">",
"x_new = np.array([2, 3, 1])\n\ntheta0 = X[y==0, :].sum(axis=0)/X[y==0, :].sum()\ntheta0\n\ntheta1 = X[y==1, :].sum(axis=0)/X[y==1, :].sum()\ntheta1\n\nlikelihood0 = (theta0**x_new).prod()\nlikelihood0\n\nlikelihood1 = (theta1**x_new).prod()\nlikelihood1\n\npx = likelihood0 * py0 + likelihood1 * py1\npx\n\nlikelihood0 * py0 / px, likelihood1 * py1 / px\n\nfrom sklearn.naive_bayes import MultinomialNB\nmodel = MultinomialNB(alpha=0).fit(X, y)\nmodel.predict_proba([x_new])",
"(3) 스무딩 팩터 알파=0.5일 때, 문제(2)를 다시 풀어보세요.\n<img src=\"4.png.jpg\" style=\"width:70%; margin: 0 auto 0 auto;\">",
"theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==0, :].sum() + 1.5)\ntheta0\n\ntheta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==1, :].sum() + 1.5)\ntheta1\n\nlikelihood0 = (theta0**x_new).prod()\nlikelihood0\n\nlikelihood1 = (theta1**x_new).prod()\nlikelihood1\n\npx = likelihood0 * py0 + likelihood1 * py1\npx\n\nlikelihood0 * py0 / px, likelihood1 * py1 / px\n\nfrom sklearn.naive_bayes import MultinomialNB\nmodel = MultinomialNB(alpha=0.5).fit(X, y)\nmodel.predict_proba([x_new])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adamsteer/nci-notebooks | .ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb | apache-2.0 | [
"What is the proposed task:\n\ningest some liDAR points into a HDF file\ningest the aircraft trajectory into the file\nanything else\n\n...and then extract data from the HDF file at different rates using a spatial 'query'\nWhat do the data look like?\nASCII point clouds with the following attributes, currently used as a full M x N array:\n- time (GPS second of week, float)\n- X coordinate (UTM metres, float)\n- Y coordinate (UTM metres, float)\n- Z coordinate (ellipsoidal height, float)\n- return intensity (unscaled, float)\n- scan angle (degrees, float)\nEverything above is easily stored in the binary .LAS format (or .LAZ). It is kept in ASCII because the following additional data have no slots in .LAS:\n\nX uncertainty (aircraft reference frame, m, float)\nY uncertainty (aircraft reference frame, m, float)\nZ uncertainty (aircraft reference frame, m, float)\n3D uncertainty (metres, float)\n\n...and optionally (depending on the use case):\n\naircraft trajectory height (to ITRF08, metres)\naicraft position uncertainty X (metres, relative to aircraft position)\naircraft and sensor attributes\n\n...and derived data:\n\nsea ice elevations (m, float)\nestimted snow depths (m, float)\nestimated snow depth uncertainty (m, float)\nestimated ice thickness (m, float)\nestimated ice thickness uncertainty (m, float)\n\nSo, you can see how quickly .LAS loses it's value. ASCII point clouds are conceptually simple, but very big - and not well suited to use in a HPC context. Too much storage overhead, and you have to read the entire file in order to extract a subset. Six million points gets to around 50MB, it's pretty inefficient.\nSo lets look at about 6 million points...\nLet's look at a small set of points\nScenario: A small set of LiDAR and 3D photogrammetry points collected adjacent to a ship (RSV Aurora Australia) parked in sea ice.\nSource: AAD's APPLS system (http://seaice.acecrc.org.au/crrs/)\nhttps://data.aad.gov.au/aadc/metadata/metadata.cfm?entry_id=SIPEX_II_RAPPLS\nhttps://data.aad.gov.au/aadc/metadata/metadata.cfm?entry_id=SIPEX_LiDAR_sea_ice\nPretty cover photo:\n<img src=\"http://seaice.acecrc.org.au/wp-content/uploads/2013/09/geometry2.png\">",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LogNorm\n%matplotlib inline\n#import plot_lidar\nfrom datetime import datetime",
"import a LiDAR swath",
"swath = np.genfromtxt('../../PhD/python-phd/swaths/is6_f11_pass1_aa_nr2_522816_523019_c.xyz')\n\nimport pandas as pd\n\ncolumns = ['time', 'X', 'Y', 'Z', 'I','A', 'x_u', 'y_u', 'z_u', '3D_u']\nswath = pd.DataFrame(swath, columns=columns)\n\nswath[1:5]",
"Now load up the aircraft trajectory",
"air_traj = np.genfromtxt('../../PhD/is6_f11/trajectory/is6_f11_pass1_local_ice_rot.3dp')\n\ncolumns = ['time', 'X', 'Y', 'Z', 'R', 'P', 'H', 'x_u', 'y_u', 'z_u', 'r_u', 'p_u', 'h_u']\nair_traj = pd.DataFrame(air_traj, columns=columns)\n\nair_traj[1:5]",
"take a quick look at the data",
"fig = plt.figure(figsize = ([30/2.54, 6/2.54]))\nax0 = fig.add_subplot(111) \na0 = ax0.scatter(swath['Y'], swath['X'], c=swath['Z'] - np.min(swath['Z']), cmap = 'gist_earth',\n vmin=0, vmax=10, edgecolors=None,lw=0, s=0.6)\na1 = ax0.scatter(air_traj['Y'], air_traj['X'], c=air_traj['Z'], cmap = 'Reds',\n lw=0, s=1)\nplt.tight_layout()",
"Making a HDF file out of those points",
"import h5py\n\n#create a file instance, with the intention to write it out\nlidar_test = h5py.File('lidar_test.hdf5', 'w')\n\nswath_data = lidar_test.create_group('swath_data')\n\nswath_data.create_dataset('GPS_SOW', data=swath['time'])\n\n#some data\nswath_data.create_dataset('UTM_X', data=swath['X'])\nswath_data.create_dataset('UTM_Y', data=swath['Y'])\nswath_data.create_dataset('Z', data=swath['Z'])\nswath_data.create_dataset('INTENS', data=swath['I'])\nswath_data.create_dataset('ANGLE', data=swath['A'])\nswath_data.create_dataset('X_UNCERT', data=swath['x_u'])\nswath_data.create_dataset('Y_UNCERT', data=swath['y_u'])\nswath_data.create_dataset('Z_UNCERT', data=swath['z_u'])\nswath_data.create_dataset('3D_UNCERT', data=swath['3D_u'])\n\n#some attributes\nlidar_test.attrs['file_name'] = 'lidar_test.hdf5'\n\nlidar_test.attrs['codebase'] = 'https://github.com/adamsteer/matlab_LIDAR'",
"That's some swath data, now some trajectory data at a different sampling rate",
"traj_data = lidar_test.create_group('traj_data')\n\n#some attributes\ntraj_data.attrs['flight'] = 11\ntraj_data.attrs['pass'] = 1\ntraj_data.attrs['source'] = 'RAPPLS flight 11, SIPEX-II 2012'\n\n#some data\ntraj_data.create_dataset('pos_x', data = air_traj['X'])\ntraj_data.create_dataset('pos_y', data = air_traj['Y'])\ntraj_data.create_dataset('pos_z', data = air_traj['Z'])",
"close and write the file out",
"lidar_test.close()",
"OK, that's an arbitrary HDF file built\nThe generated file is substantially smaller than the combined sources - 158 MB from 193, with no attention paid to optimisation.\nThe .LAZ version of the input text file here is 66 MB. More compact, but we can't query it directly - and we have to fake fields! Everything in the swath dataset can be stored, but we need to pretend uncertainties are RGB, so if person X comes along and doesn't read the metadata well, they get crazy colours, call us up and complain. Or we need to use .LAZ extra bits, and deal with awkward ways of describing things.\nIt's also probably a terrible HDF, with no respect to CF compliance at all. That's to come :)\nAnd now we add some 3D photogrammetry at about 80 points/m^2:",
"photo = np.genfromtxt('/Users/adam/Documents/PhD/is6_f11/photoscan/is6_f11_photoscan_Cloud.txt',skip_header=1)\n\ncolumns = ['X', 'Y', 'Z', 'R', 'G', 'B']\nphoto = pd.DataFrame(photo[:,0:6], columns=columns)\n\n#create a file instance, with the intention to write it out\nlidar_test = h5py.File('lidar_test.hdf5', 'r+')\n\nphoto_data = lidar_test.create_group('3d_photo')\n\nphoto_data.create_dataset('UTM_X', data=photo['X'])\nphoto_data.create_dataset('UTM_Y', data=photo['Y'])\nphoto_data.create_dataset('Z', data=photo['Z'])\nphoto_data.create_dataset('R', data=photo['R'])\nphoto_data.create_dataset('G', data=photo['G'])\nphoto_data.create_dataset('B', data=photo['B'])\n\n#del lidar_test['3d_photo']\n\nlidar_test.close()",
"Storage is a bit less efficient here.\n\nASCII cloud: 2.1 GB\n.LAZ format with same data: 215 MB\nHDF file containing LiDAR, trajectory, 3D photo cloud: 1.33 GB\n\nSo, there's probably a case for keeping super dense clouds in different files (along with all their ancillary data). Note that .LAZ is able to store all the data used for the super dense cloud here. But - how do we query it efficiently?\nAlso, this is just a demonstration, so we push on!\nnow, lets look at the HDF file... and get stuff",
"from netCDF4 import Dataset\n\nthedata = Dataset('lidar_test.hdf5', 'r')\n\nthedata",
"There are the two groups - swath_data and traj_data",
"swath = thedata['swath_data']\n\nswath\n\nutm_xy = np.column_stack((swath['UTM_X'],swath['UTM_Y']))\n\nidx = np.where((utm_xy[:,0] > -100) & (utm_xy[:,0] < 200) & (utm_xy[:,1] > -100) & (utm_xy[:,1] < 200) )\n\nchunk_z = swath['Z'][idx]\nchunk_z.size\n\nmax(chunk_z)\n\nchunk_x = swath['UTM_X'][idx]\nchunk_x.size\n\nchunk_y = swath['UTM_Y'][idx]\nchunk_y.size\n\nchunk_uncert = swath['Z_UNCERT'][idx]\nchunk_uncert.size\n\nplt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=2)",
"That gave us a small chunk of LIDAR points, without loading the whole point dataset. Neat!\n...but being continually dissatisfied, we want more! Lets get just the corresponding trajectory:",
"traj = thedata['traj_data']\n\ntraj",
"Because there's essentiually no X extent for flight data, only the Y coordinate of the flight data are needed...",
"pos_y = traj['pos_y']\n\nidx = np.where((pos_y[:] > -100.) & (pos_y[:] < 200.))\n\ncpos_x = traj['pos_x'][idx]\n\ncpos_y = traj['pos_y'][idx]\n\ncpos_z = traj['pos_z'][idx]",
"Now plot the flight line and LiDAR together",
"plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=3, cmap='gist_earth')\nplt.scatter(cpos_x, cpos_y, c=cpos_z, lw=0, s=5, cmap='Oranges')",
"...and prove that we are looking at a trajectory and some LiDAR",
"from mpl_toolkits.mplot3d import Axes3D\n\n#set up a plot\nplt_az=310\nplt_elev = 40.\nplt_s = 3\ncb_fmt = '%.1f'\n\ncmap1 = plt.get_cmap('gist_earth', 10)\n\n#make a plot\nfig = plt.figure()\nfig.set_size_inches(35/2.51, 20/2.51)\nax0 = fig.add_subplot(111, projection='3d')\na0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2,\n c=np.ndarray.tolist((chunk_z-min(chunk_z))*2),\\\n cmap=cmap1,lw=0, vmin = -0.5, vmax = 5, s=plt_s)\nax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\\\n cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)\nax0.view_init(elev=plt_elev, azim=plt_az)\nplt.tight_layout()",
"plot coloured by point uncertainty",
"#set up a plot\nplt_az=310\nplt_elev = 40.\nplt_s = 3\ncb_fmt = '%.1f'\n\ncmap1 = plt.get_cmap('gist_earth', 30)\n\n#make a plot\nfig = plt.figure()\nfig.set_size_inches(35/2.51, 20/2.51)\nax0 = fig.add_subplot(111, projection='3d')\na0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2,\n c=np.ndarray.tolist(chunk_uncert),\\\n cmap=cmap1, lw=0, vmin = 0, vmax = 0.2, s=plt_s)\nax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\\\n cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)\nax0.view_init(elev=plt_elev, azim=plt_az)\nplt.tight_layout()\nplt.savefig('thefig.png')",
"now pull in the photogrammetry cloud\nThis gets a little messy, since it appears we still need to grab X and Y dimensions - so still 20 x 10^6 x 2 points. Better than 20 x 10^6 x 6, but I wonder if I'm missing something about indexing.",
"photo = thedata['3d_photo']\n\nphoto\n\nphoto_xy = np.column_stack((photo['UTM_X'],photo['UTM_Y']))\n\nidx_p = np.where((photo_xy[:,0] > 0) & (photo_xy[:,0] < 100) & (photo_xy[:,1] > 0) & (photo_xy[:,1] < 100) )\n\nplt.scatter(photo['UTM_X'][idx_p], photo['UTM_Y'][idx_p], c = photo['Z'][idx_p],\\\n cmap='hot',vmin=-1, vmax=1, lw=0, s=plt_s)\n\np_x = photo['UTM_X'][idx_p]\np_y = photo['UTM_Y'][idx_p]\np_z = photo['Z'][idx_p]\n\nplt_az=310\nplt_elev = 70.\nplt_s = 2\n\n#make a plot\nfig = plt.figure()\nfig.set_size_inches(25/2.51, 10/2.51)\nax0 = fig.add_subplot(111, projection='3d')\n\n#LiDAR points\nax0.scatter(chunk_x, chunk_y, chunk_z-50, \\\n c=np.ndarray.tolist(chunk_z),\\\n cmap=cmap1, vmin=-30, vmax=2, lw=0, s=plt_s)\n\n#3D photogrammetry pointd\nax0.scatter(p_x, p_y, p_z, \n c=np.ndarray.tolist(p_z),\\\n cmap='hot', vmin=-1, vmax=1, lw=0, s=5)\n\n#aicraft trajectory\nax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\\\n cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)\n\n\nax0.view_init(elev=plt_elev, azim=plt_az)\nplt.tight_layout()\nplt.savefig('with_photo.png')",
"This is kind of a clunky plot - but you get the idea (I hope). LiDAR is in blues, the 100 x 100 photogrammetry patch in orange, trajectory in orange. Different data sources, different resolutions, extracted using pretty much the same set of queries.",
"print('LiDAR points: {0}\\nphotogrammetry points: {1}\\ntrajectory points: {2}'.\n format(len(chunk_x), len(p_x), len(cpos_x) ))",
"So what's happened here:\nUsing HDF as the storage medium, this worksheet has shown how to:\n\nput point clouds into HDF, storing many attributes that are not possible with .LAS\nstart on provenance data, including the trajectory that LiDAR points were generated from (this could be expanded to other properties, eg airborne GPS positions, IMU observations, survey marks, versions of trajectories, ...)\nquery for a point subset without loading all the point cloud\nand query for trajectory data in the same bounding box without loading the whole trajectory.\n\nWhat could be better\n\nCompression for bigger point clouds. .LAZ is obviously better, how can HDF/NetCDF storage happen more efficiently?\n\nnext steps:\n\nrecreate this sheet using a single NetCDF file per dataset, with proper metadata\nbreak up different point cloud sources (makes obvious sense, put them together here just because)\nrecreate this example using Davis Station data, which has topography\n\nuse cases to test:\n\nSay I want a region that is broken up across different flights/surveys/tiles. How can I get consistent data? and how can I find out if different regions of data have different uncertainties attached?\n???\n\nDreams:\n\nintegration with the plasio viewer (http://plas.io)? This is a webGL point cloud viewer, currently reading .LAZ files. I will contact the developers about reading from other formats. Obviously useful for data preview, but not an infrastructure priority."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
trsherborne/learn-python | lesson4.ipynb | mit | [
"LSESU Applicable Maths Python Lesson 4\n15/11/16\nToday we will be learning about\n* Data Structures - Official documentation on Data Structures here\n * Lists\n * Tuples\n * Dictionaries\n* Introduction to the Pandas library\n Recap from Week 3\n\nStrings\n\n```\nday = input('Enter the day of the month you were born')\nmonth = input('Enter the month of the year you were born')\nyear = input('Enter the year you were born')\nbirthday = '{} / {} / {}'.format(day,month,year)\nprint('Your birthday is \\n{}'.format(birthday))\n```\n* File I/O\n```\nfile = open('test_data.txt','r') \nfor line in file:\n print(line)\n```\n\nLists\n\n```\nlist1 = [1,2,3,4,5,6]\nfor item in reversed(list1):\n print(item)\n```\n\nList comprehensions\n\nsquares = [x**2 for x in range(1,11)]\nLists\nWe met lists in lesson 3 and are briefly going to go over them again to get in the mindset of looking at data structures in Python. A data structure is a type of object in Python which stores information in some organised format. The format of this organisation dictates what the data structure is. There are buckets of different kinds of data structures, but when writing Python you will primarily be using lists, dictionaries and tuples.",
"# Remember we declare an empty list as so\nmy_list = []\n\n# Add new elements to the end of a previous list\nmy_list.append(1)\nmy_list.append('Hello')\nmy_list.append(0.05)\n\n# Delete specific elements\ndel my_list[-1] # Remove by index\nmy_list.remove('Hello') # Remove by value\n\n# Replace elements\nmy_list[0] = 2\n\n# Or we can declare a whole, or part of, a list upon declaration\nshopping_list = ['bread','toothpaste','blueberries','milk']\n\n# Printing each element of a list\nfor item in shopping_list:\n print(item)\n \n# Find the length of a list\nprint('The shopping list has {} items'.format(len(shopping_list)))\n\n# We can test for membership in a list in the same fashion as a string\nif 'milk' in shopping_list:\n print('You can\\'t drink milk!')\n shopping_list.remove('milk')\n \nif 'chocolate' not in shopping_list:\n print('You forgot chocolate!')\n shopping_list.append('chocolate')\n \nprint(shopping_list)",
"Lists always have their order preserved in Python, so you can guarantee that shopping_list[0] will have the value \"bread\"\nTuples\nA tuple is another of the standard Python data strucure. They behave in a similar way to the list but have one key difference, they are immutable. Let's look at what this means.\nA more detailed intro to Tuples can be found here",
"# A tuple is declared with the curved brackets () instead of the [] for a list\nmy_tuple = (1,2,'cat','dog')\n\n# But since a tuple is immutable the next line will not run\nmy_tuple[0] = 4",
"So what can we learn from this? Once you declare a tuple, the object cannot be changed. \nFor this reason, tuples have more optimised methods when you use them so can be more efficient and faster in your code.\nA closer look at using Tuples",
"# A tuple might be immutable but can contain mutable objects\nmy_list_tuple = ([1,2,3],[4,5,6])\n\n# This won't work\n# my_list_tuple[0] = [3,2,1]\n\n# But this will!\nmy_list_tuple[0][0:3] = [3,2,1]\n\nprint(my_list_tuple)\n\n# You can add tuples together\nt1 = (1,2,3)\nt1 += (4,5,6)\nprint(t1)\n\nt2 = (10,20,30)\nt3 = (40,50,60)\nprint(t2+t3)\n\n# Use index() and count() to look at a tuple\nt1 = (1,2,3,1,1,2)\n\nprint(t1.index(2)) # Returns the first index of 2\n\nprint(t1.count(1)) # Returns how many 1's are in the tuple\n\n# You can use tuples for multiple assignments and for multiple return from functions\n\n(x,y,z) = (1,2,3)\nprint(x)\n\n\n# This is a basic function doing multiple return in Python\ndef norm_and_square(a):\n return a,a**2\n\n(a,b) = norm_and_square(4)\n\nprint(a)\nprint(b)\n\n# Swap items using tuples\n\nx = 10\ny = 20\nprint('x is {} and y is {}'.format(x,y))\n\n(x,y) = (y,x)\n\nprint('x is {} and y is {}'.format(x,y))",
"Question - Write a function which swaps two elements using tuples",
"# TO DO\ndef my_swap_function(a,b):\n # write here!\n return b,a\n\n# END TO DO\n\na = 1\nb = 2\nx = my_swap_function(a,b)\nprint(x)",
"Dictionaries\nDictionaries are perhaps the most useful and hardest to grasp data structure from the basic set in Python. Dictionaries are not iterable in the same sense as lists and tuples and using them required a different approach.\nDictionaries are sometimes called hash maps, hash tables or maps in other programming languages. You can think of a dictionary as the same as a physical dictionary, it is a collection of key (the word) and value (the definition) pairs. \nEach key is unique and has an associated value, the key functions as the index for the value but it can be anything. In contrast to alphabetical dictionaries, the order of a Python dictionary is not guaranteed.",
"# Declare a dictionary using the {} brackets or the dict() method\nmy_dict = {}\n\n# Add new items to the dictionary by stating the key as the index and the value\nmy_dict['bananas'] = 'this is a fruit and a berry'\nmy_dict['apples'] = 'this is a fruit'\nmy_dict['avocados'] = 'this is a berry'\n\nprint(my_dict)\n\n# So now we can use the key to get a value in the dictionary\nprint(my_dict['bananas'])\n\n# But this won't work if we haven't added an item to the dict\n#print(my_dict['cherries'])\n\n# We can fix this line using the get(key,def) method. This is safer as you wont get KeyError!\nprint(my_dict.get('cherries','Not found :('))\n\n# If you are given a dictionary data file you know nothing about you can inspect it like so\n\n# Get all the keys of a dictionary\nprint(my_dict.keys())\n\n# Get all the values from a dictionary\nprint(my_dict.values())\n\n# Of course you could print the whole dictionary, but it might be huge! These methods break\n# the dict down, but the downside is that you can't match up the keys and values!\n\n# Test for membership in the keys using the in operator\n\nif 'avocados' in my_dict:\n print(my_dict['avocados'])\n \n\n# Dictionary values can also be lists or other data structures\nmy_lists = {}\nmy_lists['shopping list'] = shopping_list\nmy_lists['holidays'] = ['Munich','Naples','New York','Tokyo','San Francisco','Los Angeles']\n\n# Now my I store a dictionary with each list named with keys and the lists as values\nprint(my_lists)",
"Wrapping everything up, we can create a list of dictionaries with multiple fields and iterate over a dictionary",
"# Declare a list\neurope = []\n\n# Create dicts and add to lists\ngermany = {\"name\": \"Germany\", \"population\": 81000000,\"speak_german\":True}\neurope.append(germany)\nluxembourg = {\"name\": \"Luxembourg\", \"population\": 512000,\"speak_german\":True}\neurope.append(luxembourg)\nuk = {\"name\":\"United Kingdom\",\"population\":64100000,\"speak_german\":False}\neurope.append(uk)\n\nprint(europe)\nprint()\n\nfor country in europe:\n for key, value in country.items():\n print('{}\\t{}'.format(key,value))\n print()",
"Question - Add at least 3 more countries to the europe list and use a for loop to get a new list of every country which speaks German",
"# TO DO - You might need more than just a for loop!\n\n# END TO DO ",
"A peek at Pandas\nWe've seen some of the standard library of Data structures in Python. We will briefly look at Pandas now, a powerful data manipulation library which is a sensible next step to organising your data when you need to use something more complex than standard Python data structures.\nThe core of Pandas is the DataFrame, which will look familiar if you have worked with R before. This organises data in a table format and gives you spreadsheet like handling of your information. Using Pandas can make your job handling data easier, and many libraries for plotting data (such as Seaborn) can handle a Pandas DataFrame much easier than a list as input.\nNote: Pandas uses NumPy under the hood, another package for simplifying numerical operations and working with arrays. We will look at NumPy and Pandas together in 2 lessons time.",
"# We import the Pandas packages using the import statement we've seen before\n\nimport pandas as pd\n\n# To create a Pandas DataFrame from a simpler data structure we use the following routine\n\neurope_df = pd.DataFrame.from_dict(europe)\n\nprint(type(europe_df))\n\n# Running this cell as is provides the fancy formatting of Pandas which can prove useful.\n\neurope_df",
"Run the previous block now. Here we can see how our list of dictionaries was converted to a DataFrame. Each dictionary became a row, each key became a column and the values became the data inside the object.\nThat's all on Pandas for now! For a quick tutorial on using Pandas you can check this link out. We'll come back to this in the future, we just have to look at Object Oriented Programming and Classes first!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ramhiser/Keras-Tutorials | notebooks/06_autoencoder.ipynb | mit | [
"Autoencoders\nI've been exploring how useful autoencoders are and how painfully simple they are to implement in Keras. In this post, my goal is to better understand them myself, so I borrow heavily from the Keras blog on the same topic. So rather than sprinkling references to the Keras blog throughout the post, just assume I borrowed it from Francois Chollet. Thanks to Francois for making his code available!\nFor instance, I thought about drawing a diagram overviewing autoencoders, but it's hard to beat the effective simplicity of this diagram.\n\nSo, autoencoders are legit. They perform data compression but not in the JPEG or MPEG way, which make some broad assumptions about images, sound, and video and apply compression based on the assumptions. Instead, autoencoders learn (automatically) a lossy compression based on the data examples fed in. So the compression is specific to those examples.\nWhat's Required\nAutoencoders require 3 things:\n\nEncoding function\nDecoding function\nLoss function describing the amount of information loss between the compressed and decompressed representations of the data examples and the decompressed representation (i.e. a \"loss\" function).\n\nThe encoding/decoding functions are typically (parametric) neural nets and are differentiable with respect to the distance function. The differentiable part enables optimizing the parameters of the encoding/decoding functions to minimize the reconstruction loss.\nWhat Are They Good For\n\nData Denoising\nDimension Reduction\nData Visualization (basically the same as 2, but plots)\n\nFor data denoising, think PCA, but nonlinear. In fact, if the encoder/decoder functions are linear, the result spans the space of the PCA solution. The nonlinear part is useful because they can capture, for example, multimodality in the feature space, which PCA can't.\nDimension reduction is a direct result of the lossy compression of the algorithm. It can help with denoising and pre-training before building another ML algorithm. But is the compression good enough to replace JPEG or MPEG? Possibly. Check out this post based on a recent paper.\nBut this post is not about the cutting edge stuff. Instead, we're going to focus on more of the basics and do the following:\n\nSimple Autoencoder\nDeep Autoencoder\nConvolution Autoencoder\nBuild a Second Convolution Autoencoder to Denoise Images\n\nData Loading and Preprocessing\nFor this post, I'm going to use the MNIST data set. To get started, let's start with the boilerplate imports.",
"from IPython.display import Image, SVG\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nimport numpy as np\nimport keras\nfrom keras.datasets import mnist\nfrom keras.models import Model, Sequential\nfrom keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Reshape\nfrom keras import regularizers",
"With that out of the way, let's load the MNIST data set and scale the images to a range between 0 and 1. If you haven't already downloaded the data set, the Keras load_data function will download the data directly from S3 on AWS.",
"# Loads the training and test data sets (ignoring class labels)\n(x_train, _), (x_test, _) = mnist.load_data()\n\n# Scales the training and test data to range between 0 and 1.\nmax_value = float(x_train.max())\nx_train = x_train.astype('float32') / max_value\nx_test = x_test.astype('float32') / max_value",
"The data set consists 3D arrays with 60K training and 10K test images. The images have a resolution of 28 x 28 (pixels).",
"x_train.shape, x_test.shape",
"To work with the images as vectors, let's reshape the 3D arrays as matrices. In doing so, we'll reshape the 28 x 28 images into vectors of length 784",
"x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))\nx_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))\n\n(x_train.shape, x_test.shape)",
"Simple Autoencoder\nLet's start with a simple autoencoder for illustration. The encoder and decoder functions are each fully-connected neural layers. The encoder function uses a ReLU activation function, while the decoder function uses a sigmoid activation function.\nSo what are the encoder and the decoder layers doing?\n\nThe encoder layer \"encodes\" the input image as a compressed representation in a reduced dimension. The compressed image typically looks garbled, nothing like the original image.\nThe decoder layer \"decodes\" the encoded image back to the original dimension. The decoded image is a lossy reconstruction of the original image.\n\nIn our example, the compressed image has a dimension of 32. The encoder model reduces the dimension from the original 784-dimensional vector to the encoded 32-dimensional vector. The decoder model restores the dimension from the encoded 32-dimensional representation back to the original 784-dimensional vector.\nThe compression factor is the ratio of the input dimension to the encoded dimension. In our case, the factor is 24.5 = 784 / 32.\nThe autoencoder model maps an input image to its reconstructed image.",
"# input dimension = 784\ninput_dim = x_train.shape[1]\nencoding_dim = 32\n\ncompression_factor = float(input_dim) / encoding_dim\nprint(\"Compression factor: %s\" % compression_factor)\n\nautoencoder = Sequential()\nautoencoder.add(\n Dense(encoding_dim, input_shape=(input_dim,), activation='relu')\n)\nautoencoder.add(\n Dense(input_dim, activation='sigmoid')\n)\n\nautoencoder.summary()",
"Encoder Model\nWe can extract the encoder model from the first layer of the autoencoder model. The reason we want to extract the encoder model is to examine what an encoded image looks like.",
"input_img = Input(shape=(input_dim,))\nencoder_layer = autoencoder.layers[0]\nencoder = Model(input_img, encoder_layer(input_img))\n\nencoder.summary()",
"Okay, now we're ready to train our first autoencoder. We'll iterate on the training data in batches of 256 in 50 epochs. Let's also use the Adam optimizer and per-pixel binary crossentropy loss. The purpose of the loss function is to reconstruct an image similar to the input image.\nI want to call out something that may look like a typo or may not be obvious at first glance. Notice the repeat of x_train in autoencoder.fit(x_train, x_train, ...). This implies that x_train is both the input and output, which is exactly what we want for image reconstruction.\nI'm running this code on a laptop, so you'll notice the training times are a bit slow (no GPU).",
"autoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nautoencoder.fit(x_train, x_train,\n epochs=50,\n batch_size=256,\n shuffle=True,\n validation_data=(x_test, x_test))",
"We've successfully trained our first autoencoder. With a mere 50,992 parameters, our autoencoder model can compress an MNIST digit down to 32 floating-point digits. Not that impressive, but it works.\nTo check out the encoded images and the reconstructed image quality, we randomly sample 10 test images. I really like how the encoded images look. Do they make sense? No. Are they eye candy though? Most definitely.\nHowever, the reconstructed images are quite lossy. You can see the digits clearly, but notice the loss in image quality.",
"num_images = 10\nnp.random.seed(42)\nrandom_test_images = np.random.randint(x_test.shape[0], size=num_images)\n\nencoded_imgs = encoder.predict(x_test)\ndecoded_imgs = autoencoder.predict(x_test)\n\nplt.figure(figsize=(18, 4))\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(3, num_images, i + 1)\n plt.imshow(x_test[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot encoded image\n ax = plt.subplot(3, num_images, num_images + i + 1)\n plt.imshow(encoded_imgs[image_idx].reshape(8, 4))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n # plot reconstructed image\n ax = plt.subplot(3, num_images, 2*num_images + i + 1)\n plt.imshow(decoded_imgs[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()",
"Deep Autoencoder\nAbove, we used single fully-connected layers for both the encoding and decoding models. Instead, we can stack multiple fully-connected layers to make each of the encoder and decoder functions deep. You know because deep learning.\nIn this next model, we'll use 3 fully-connected layers for the encoding model with decreasing dimensions from 128 to 64 32 again. Likewise, we'll add 3 fully-connected decoder layers that reconstruct the image back to 784 dimensions. Except for the last layer, we'll use ReLU activation functions again.\nIn Keras, this model is painfully simple to do, so let's get started. We'll use the same training configuration: Adam + 50 epochs + batch size of 256.",
"autoencoder = Sequential()\n\n# Encoder Layers\nautoencoder.add(Dense(4 * encoding_dim, input_shape=(input_dim,), activation='relu'))\nautoencoder.add(Dense(2 * encoding_dim, activation='relu'))\nautoencoder.add(Dense(encoding_dim, activation='relu'))\n\n# Decoder Layers\nautoencoder.add(Dense(2 * encoding_dim, activation='relu'))\nautoencoder.add(Dense(4 * encoding_dim, activation='relu'))\nautoencoder.add(Dense(input_dim, activation='sigmoid'))\n\nautoencoder.summary()",
"Encoder Model\nLike we did above, we can extract the encoder model from the autoencoder. The encoder model consists of the first 3 layers in the autoencoder, so let's extract them to visualize the encoded images.",
"input_img = Input(shape=(input_dim,))\nencoder_layer1 = autoencoder.layers[0]\nencoder_layer2 = autoencoder.layers[1]\nencoder_layer3 = autoencoder.layers[2]\nencoder = Model(input_img, encoder_layer3(encoder_layer2(encoder_layer1(input_img))))\n\nencoder.summary()\n\nautoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nautoencoder.fit(x_train, x_train,\n epochs=50,\n batch_size=256,\n validation_data=(x_test, x_test))",
"As with the simple autoencoder, we randomly sample 10 test images (the same ones as before). The reconstructed digits look much better than those from the single-layer autoencoder. This observation aligns with the reduction in validation loss after adding multiple layers to the autoencoder.",
"num_images = 10\nnp.random.seed(42)\nrandom_test_images = np.random.randint(x_test.shape[0], size=num_images)\n\nencoded_imgs = encoder.predict(x_test)\ndecoded_imgs = autoencoder.predict(x_test)\n\nplt.figure(figsize=(18, 4))\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(3, num_images, i + 1)\n plt.imshow(x_test[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot encoded image\n ax = plt.subplot(3, num_images, num_images + i + 1)\n plt.imshow(encoded_imgs[image_idx].reshape(8, 4))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n # plot reconstructed image\n ax = plt.subplot(3, num_images, 2*num_images + i + 1)\n plt.imshow(decoded_imgs[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()",
"Convolutional Autoencoder\nNow that we've explored deep autoencoders, let's use a convolutional autoencoder instead, given that the input objects are images. What this means is our encoding and decoding models will be convolutional neural networks instead of fully-connected networks.\nAgain, Keras makes this very easy for us. Before we get started though, we need to reshapes the images back to 28 x 28 x 1 for the convnets. The 1 is for 1 channel because black and white. If we had RGB color, there would be 3 channels.",
"x_train = x_train.reshape((len(x_train), 28, 28, 1))\nx_test = x_test.reshape((len(x_test), 28, 28, 1))",
"To build the convolutional autoencoder, we'll make use of Conv2D and MaxPooling2D layers for the encoder and Conv2D and UpSampling2D layers for the decoder. The encoded images are transformed to a 3D array of dimensions 4 x 4 x 8, but to visualize the encoding, we'll flatten it to a vector of length 128. I tried to use an encoding dimension of 32 like above, but I kept getting subpar results.\nAfter the flattening layer, we reshape the image back to a 4 x 4 x 8 array before upsampling back to a 28 x 28 x 1 image.",
"autoencoder = Sequential()\n\n# Encoder Layers\nautoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:]))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same'))\n\n# Flatten encoding for visualization\nautoencoder.add(Flatten())\nautoencoder.add(Reshape((4, 4, 8)))\n\n# Decoder Layers\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(16, (3, 3), activation='relu'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))\n\nautoencoder.summary()",
"Encoder Model\nTo extract the encoder model for the autoencoder, we're going to use a slightly different approach than before. Rather than extracting the first 6 layers, we're going to create a new Model with the same input as the autoencoder, but the output will be that of the flattening layer. As a side note, this is a very useful technique for grabbing submodels for things like transfer learning.\nAs I mentioned before, the encoded image is a vector of length 128.",
"encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_1').output)\nencoder.summary()\n\nautoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nautoencoder.fit(x_train, x_train,\n epochs=100,\n batch_size=128,\n validation_data=(x_test, x_test))",
"The reconstructed digits look even better than before. This is no surprise given an even lower validation loss. Other than slight improved reconstruction, check out how the encoded image has changed. What's even cooler is that the encoded images of the 9 look similar as do those of the 8's. This similarity was far less pronounced for the simple and deep autoencoders.",
"num_images = 10\nnp.random.seed(42)\nrandom_test_images = np.random.randint(x_test.shape[0], size=num_images)\n\nencoded_imgs = encoder.predict(x_test)\ndecoded_imgs = autoencoder.predict(x_test)\n\nplt.figure(figsize=(18, 4))\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(3, num_images, i + 1)\n plt.imshow(x_test[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot encoded image\n ax = plt.subplot(3, num_images, num_images + i + 1)\n plt.imshow(encoded_imgs[image_idx].reshape(16, 8))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n # plot reconstructed image\n ax = plt.subplot(3, num_images, 2*num_images + i + 1)\n plt.imshow(decoded_imgs[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()",
"Denoising Images with the Convolutional Autoencoder\nEarlier, I mentioned that autoencoders are useful for denoising data including images. When I learned about this concept in grad school, my mind was blown. This simple task helped me realize data can be manipulated in very useful ways and that the dirty data we often inherit can be cleansed using more advanced techniques.\nWith that in mind, let's add bit of noise to the test images and see how good the convolutional autoencoder is at removing the noise.",
"x_train_noisy = x_train + np.random.normal(loc=0.0, scale=0.5, size=x_train.shape)\nx_train_noisy = np.clip(x_train_noisy, 0., 1.)\n\nx_test_noisy = x_test + np.random.normal(loc=0.0, scale=0.5, size=x_test.shape)\nx_test_noisy = np.clip(x_test_noisy, 0., 1.)\n\nnum_images = 10\nnp.random.seed(42)\nrandom_test_images = np.random.randint(x_test.shape[0], size=num_images)\n\n# Denoise test images\nx_test_denoised = autoencoder.predict(x_test_noisy)\n\nplt.figure(figsize=(18, 4))\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(2, num_images, i + 1)\n plt.imshow(x_test_noisy[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot reconstructed image\n ax = plt.subplot(2, num_images, num_images + i + 1)\n plt.imshow(x_test_denoised[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()",
"Convolutional Autoencoder - Take 2\nWell, those images are terrible. They remind me of the mask from the movie Scream.\n\nOkay, so let's try that again. This time we're going to build a ConvNet with a lot more parameters and forego visualizing the encoding layer. The network will be a bit larger and slower to train, but the results are definitely worth the effort.\nOne more thing: this time, let's use (x_train_noisy, x_train) as training data and (x_test_noisy, x_test) as validation data.",
"autoencoder = Sequential()\n\n# Encoder Layers\nautoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:]))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\nautoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same'))\nautoencoder.add(MaxPooling2D((2, 2), padding='same'))\n\n# Decoder Layers\nautoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same'))\nautoencoder.add(UpSampling2D((2, 2)))\nautoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same'))\n\nautoencoder.summary()\n\nautoencoder.compile(optimizer='adam', loss='binary_crossentropy')\nautoencoder.fit(x_train_noisy, x_train,\n epochs=100,\n batch_size=128,\n validation_data=(x_test_noisy, x_test))\n\n# Denoise test images\nx_test_denoised = autoencoder.predict(x_test_noisy)\n\nplt.figure(figsize=(18, 4))\n\nfor i, image_idx in enumerate(random_test_images):\n # plot original image\n ax = plt.subplot(2, num_images, i + 1)\n plt.imshow(x_test_noisy[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n \n # plot reconstructed image\n ax = plt.subplot(2, num_images, num_images + i + 1)\n plt.imshow(x_test_denoised[image_idx].reshape(28, 28))\n plt.gray()\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nplt.show()",
"Fantastic, those images almost look like the originals."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
the-deep-learners/study-group | neural-networks-and-deep-learning/src/run_network.ipynb | mit | [
"Network from Nielsen's Chapter 1\nhttp://neuralnetworksanddeeplearning.com/chap1.html#implementing_our_network_to_classify_digits\nLoad MNIST Data",
"import mnist_loader\n\ntraining_data, validation_data, test_data = mnist_loader.load_data_wrapper()",
"Set up Network",
"import network\n\n# 784 (28 x 28 pixel images) input neurons; 30 hidden neurons; 10 output neurons\nnet = network.Network([784, 30, 10])",
"Train Network",
"# Use stochastic gradient descent over 30 epochs, with mini-batch size of 10, learning rate of 3.0\nnet.SGD(training_data, 30, 10, 3.0, test_data=test_data)",
"Exercise: Create network with just two layers",
"two_layer_net = network.Network([784, 10])\n\ntwo_layer_net.SGD(training_data, 10, 10, 1.0, test_data=test_data)\n\ntwo_layer_net.SGD(training_data, 10, 10, 2.0, test_data=test_data)\n\ntwo_layer_net.SGD(training_data, 10, 10, 3.0, test_data=test_data)\n\ntwo_layer_net.SGD(training_data, 10, 10, 4.0, test_data=test_data)\n\ntwo_layer_net.SGD(training_data, 20, 10, 3.0, test_data=test_data)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb | bsd-3-clause | [
"statsmodels Principal Component Analysis\nKey ideas: Principal component analysis, world bank data, fertility\nIn this notebook, we use principal components analysis (PCA) to analyze the time series of fertility rates in 192 countries, using data obtained from the World Bank. The main goal is to understand how the trends in fertility over time differ from country to country. This is a slightly atypical illustration of PCA because the data are time series. Methods such as functional PCA have been developed for this setting, but since the fertility data are very smooth, there is no real disadvantage to using standard PCA in this case.",
"%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport statsmodels.api as sm\nfrom statsmodels.multivariate.pca import PCA\n\nplt.rc(\"figure\", figsize=(16, 8))\nplt.rc(\"font\", size=14)",
"The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:",
"data = sm.datasets.fertility.load_pandas().data\ndata.head()",
"Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.",
"columns = list(map(str, range(1960, 2012)))\ndata.set_index(\"Country Name\", inplace=True)\ndta = data[columns]\ndta = dta.dropna()\ndta.head()",
"There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the \"objects\" and the columns as the \"variables\", or vice-versa. Here we will treat the fertility measures as \"variables\" used to measure the countries as \"objects\". Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate \"profiles\" or \"basis functions\" that capture most of the variation over time in the different countries.\nThe mean trend is removed in PCA, but its worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980.",
"ax = dta.mean().plot(grid=False)\nax.set_xlabel(\"Year\", size=17)\nax.set_ylabel(\"Fertility rate\", size=17)\nax.set_xlim(0, 51)",
"Next we perform the PCA:",
"pca_model = PCA(dta.T, standardize=False, demean=True)",
"Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.",
"fig = pca_model.plot_scree(log_scale=False)",
"Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.",
"fig, ax = plt.subplots(figsize=(8, 4))\nlines = ax.plot(pca_model.factors.iloc[:, :3], lw=4, alpha=0.6)\nax.set_xticklabels(dta.columns.values[::10])\nax.set_xlim(0, 51)\nax.set_xlabel(\"Year\", size=17)\nfig.subplots_adjust(0.1, 0.1, 0.85, 0.9)\nlegend = fig.legend(lines, [\"PC 1\", \"PC 2\", \"PC 3\"], loc=\"center right\")\nlegend.draw_frame(False)",
"To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.",
"idx = pca_model.loadings.iloc[:, 0].argsort()",
"First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).",
"def make_plot(labels):\n fig, ax = plt.subplots(figsize=(9, 5))\n ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax)\n dta.mean().plot(ax=ax, grid=False, label=\"Mean\")\n ax.set_xlim(0, 51)\n fig.subplots_adjust(0.1, 0.1, 0.75, 0.9)\n ax.set_xlabel(\"Year\", size=17)\n ax.set_ylabel(\"Fertility\", size=17)\n legend = ax.legend(\n *ax.get_legend_handles_labels(), loc=\"center left\", bbox_to_anchor=(1, 0.5)\n )\n legend.draw_frame(False)\n\nlabels = dta.index[idx[-5:]]\nmake_plot(labels)",
"Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.",
"idx = pca_model.loadings.iloc[:, 1].argsort()\nmake_plot(dta.index[idx[-5:]])",
"Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.",
"make_plot(dta.index[idx[:5]])",
"We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.",
"fig, ax = plt.subplots()\npca_model.loadings.plot.scatter(x=\"comp_00\", y=\"comp_01\", ax=ax)\nax.set_xlabel(\"PC 1\", size=17)\nax.set_ylabel(\"PC 2\", size=17)\ndta.index[pca_model.loadings.iloc[:, 1] > 0.2].values"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mdeff/ntds_2017 | projects/reports/face_manifold/NTDS_Project.ipynb | mit | [
"Manifold Learning on Face Data\nAtul Kumar Sinha, Karttikeya Mangalam and Prakhar Srivastava\nIn this project, we explore manifold learning on face data to embed high dimensional face images into a lower dimensional embedding. We hypothesize that euclidean distance in this lower dimensional embedding reflects image similarity in a better way. This hypothesis is tested by choosing path(s) that contain a number of points (images) from this lower dimensional space which represent an ordererd set of images. These images are then combined to generate a video which shows a smooth morphing.",
"import os\nimport numpy as np\nfrom sklearn.tree import ExtraTreeRegressor\nfrom sklearn import manifold\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\nfrom matplotlib import animation\nfrom PIL import Image\nimport pickle\nfrom scipy.linalg import norm\n\nimport networkx as nx\nfrom scipy import spatial\n\n\nfrom bokeh.plotting import figure, output_file, show, ColumnDataSource\nfrom bokeh.models import HoverTool\nfrom bokeh.io import output_notebook\noutput_notebook()\n\n%matplotlib inline\n\nplt.rcParams[\"figure.figsize\"] = (8,6)\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nPATH = './img_align_celeba'\n\ndef load_image(filepath):\n ''' Loads an image at the path specified by the parameter filepath '''\n im = Image.open(filepath)\n return im\n\ndef show_image(im):\n ''' Displays an image'''\n fig1, ax1 = plt.subplots(1, 1)\n ax1.imshow(im, cmap='gray');\n return\n\n#Loads image files from all sub-directories\n\nimgfiles = [os.path.join(root, name)\n for root, dirs, files in os.walk(PATH)\n for name in files\n if name.endswith((\".jpg\"))]",
"Dataset\nWe are using CelebA Dataset which is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter.\nWe randomly downsample it by a factor of 30 for computational reasons.",
"#N=int(len(imgfiles)/30)\nN=len(imgfiles)\nprint(\"Number of images = {}\".format(N))\ntest = imgfiles[0:N]\n\ntest[1]",
"Loading the data",
"sample_path = imgfiles[0]\nsample_im = load_image(sample_path)\nsample_im = np.array(sample_im)\nimg_shape = (sample_im.shape[0],sample_im.shape[1])\n\nims = np.zeros((N, sample_im.shape[1]*sample_im.shape[0]))\nfor i, filepath in enumerate(test):\n im = load_image(filepath)\n im = np.array(im)\n im = im.mean(axis=2)\n im = np.asarray(im).ravel().astype(float)\n ims[i] = im",
"Learning the Manifold\nWe are using Isomap for dimensionality reduction as we believe that the face image data lies on a structured manifold in a higher dimension and thus is embeddable in a much lower dimension without much loss of information.\nFurther, Isomap is a graph based technique which aligns with our scope.",
"#iso = manifold.Isomap(n_neighbors=2, n_components=3, max_iter=500, n_jobs=-1)\n\n#Z = iso.fit_transform(ims) #don't run, can load from pickle as in below cells\n\n#saving the learnt embedding\n\n#with open('var6753_n2_d3.pkl', 'wb') as f: #model learnt with n_neighbors=2 and n_components=3\n# pickle.dump(Z,f)\n\n#with open('var6753_n2_d2.pkl', 'wb') as f: #model learnt with n_neighbors=2 and n_components=2\n# pickle.dump(Z,f)\n\n#with open('var6753_n4_d3.pkl', 'wb') as f: #model learnt with n_neighbors=4 and n_components=3\n# pickle.dump(Z,f)\n\nwith open('var6753_n2_d2.pkl', 'rb') as f:\n Z = pickle.load(f)\n\n#Visualizing the learnt 3D-manifold in two dimensions\n\nsource = ColumnDataSource(\n data=dict(\n x=Z[:, 0],\n y=Z[:, 1],\n desc=list(range(Z.shape[0])),\n )\n )\n\nhover = HoverTool(\n tooltips=[\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"desc\", \"@desc\"),\n ]\n )\n\np = figure(plot_width=700, plot_height=700, tools=[hover],title=\"Mouse over the dots\")\n\np.circle('x', 'y', size=10, source=source)\nshow(p)",
"Regeneration from Lower Dimensional Space\nWhile traversing the chosen path, we are also sub sampling in the lower dimensional space in order to create smooth transitions in the video. We naturally expect smoothness as points closer in the lower dimensional space should correspond to similar images. Since we do not have an exact representation for these sub-sampled points in the original image space, we need a method to map these back to the higher dimension.\nWe will be using Extremely randomized trees for regression.\nAs an alternative, we would also be testing convex combination approach to generate representations for the sub-sampled points.\nPath Selection Heuristic\nMethod 1\nGenerating k-nearest graph using the Gaussian kernel. We further generate all pair shortest paths from this graph and randomly choose any path from that list for visualization. For regeneration of sub-sampled points, we use Extremely randomized trees as mentioned above.",
"#Mapping the regressor from low dimension space to high dimension space\n\nlin = ExtraTreeRegressor(max_depth=19)\nlin.fit(Z, ims)\n\nlin.score(Z, ims)\n\npred = lin.predict(Z[502].reshape(1, -1));\nfig_new, [ax1,ax2] = plt.subplots(1,2)\nax1.imshow(ims[502].reshape(*img_shape), cmap = 'gray')\nax1.set_title('Original')\nax2.imshow(pred.reshape(*img_shape), cmap = 'gray')\nax2.set_title('Reconstructed')\n\nperson1 = 34\nperson2 = 35\ntest = ((Z[person1] + Z[person2]) / 2) #+ 0.5*np.random.randn(*Z[person1].shape)\npred = lin.predict(test.reshape(1, -1))\nfig_newer, [ax1, ax2, ax3] = plt.subplots(1, 3)\nax1.imshow(ims[person1].reshape(*img_shape), cmap = 'gray')\nax1.set_title('Face 1')\nax2.imshow(ims[person2].reshape(*img_shape), cmap = 'gray')\nax2.set_title('Face 2')\nax3.imshow(pred.reshape(*img_shape), cmap = 'gray')\nax3.set_title('Face between lying on manifold');\n\ndistances = spatial.distance.squareform(spatial.distance.pdist(Z, 'braycurtis'))\n\nkernel_width = distances.mean()\nweights = np.exp(-np.square(distances) / (kernel_width ** 0.1))\nfor i in range(weights.shape[0]):\n weights[i][i] = 0\n\nNEIGHBORS = 2\n#NEIGHBORS = 100\n# Your code here.\n\n#Find sorted indices of weights for each row\nindices = np.argsort(weights, axis = 1)\n\n#Create a zero matrix which would later be filled with sparse weights\nn_weights = np.zeros((weights.shape[0], weights.shape[1]))\n\n#Loop that iterates over the 'K' strongest weights in each row, and assigns them to sparse matrix, leaving others zero\nfor i in range(indices.shape[0]):\n for j in range(indices.shape[1] - NEIGHBORS, indices.shape[1]):\n col = indices[i][j]\n n_weights[i][col] = weights[i][col] \n\n#Imposing symmetricity\nbig = n_weights.T > n_weights\nn_weights_s = n_weights - n_weights * big + n_weights.T * big\n\nG = nx.from_numpy_matrix(n_weights_s)\n\npos = {}\nfor i in range(Z.shape[0]):\n pos[i] = Z[i,0:2]\n\nfig2,ax2 = plt.subplots()\nnx.draw(G, pos, ax=ax2, node_size=10)\n\nimlist=nx.all_pairs_dijkstra_path(G)[0][102] #choosing the path starting at node 0 and ending at node 102\nimlist\n\nN=25 #number of sub-samples between each consecutive pair in the path\nlbd = np.linspace(0, 1, N)\ncounter = 0\nfor count, i in enumerate(imlist):\n if count != len(imlist) - 1:\n person1 = i\n person2 = imlist[count + 1]\n for j in range(N):\n test = (lbd[j] * Z[person2]) + ((1 - lbd[j]) * Z[person1])\n pred = lin.predict(test.reshape(1, -1))\n im = Image.fromarray(pred.reshape(*img_shape))\n im = im.convert('RGB')\n im.save('{}.png'.format(counter))\n counter += 1\n\nos.system(\"ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method1.mp4\")",
"Please check the generated video in the same enclosing folder.\nObserving the output of the tree regressor we notice sudden jumps in the reconstructed video. We suspect that these discontinuities are either an artefact of the isomap embedding in a much lower dimension or because of the reconstruction method. \nTo investigate further we plot the frobenius norm of the sampled image in the isomap domain and that of the reconstructed image in the original domain. Since, we are sampling on a linear line between two images, the plot of the norm of the image of expected to be either an increasing or a decreasing linear graph. This indeed turnout the case for the sampled images in the isomap domain.\nHowever, as we suspected, after reconstruction we observed sudden jumps in the plot. Clearly, this is because of the tree regressor which is overfitting the data, in which case there are sudden jumps in the plot.",
"norm_vary = list()\nnorm_im = list()\nlbd = np.linspace(0, 1, 101)\nperson1=12\nperson2=14\nfor i in range(101):\n test = (lbd[i] * Z[person2]) + ((1-lbd[i]) * Z[person1])\n norm_vary.append(norm(test))\n pred = lin.predict(test.reshape(1, -1))\n im = Image.fromarray(pred.reshape(*img_shape))\n norm_im.append(norm(im))\n\nf, ax = plt.subplots(1,1)\nax.plot(norm_vary)\nax.set_title('Norm for the mean image in projected space')\n\nnorm_vary = list()\nnorm_im = list()\nlbd = np.linspace(0, 1, 101)\nfor i in range(101):\n test = (lbd[i] * Z[person1]) + ((1-lbd[i]) * Z[person2])\n norm_vary.append(norm(test))\n pred = lin.predict(test.reshape(1, -1))\n im = Image.fromarray(pred.reshape(*img_shape))\n norm_im.append(norm(im))\nf, ax = plt.subplots(1,1)\nax.plot(norm_im)\nax.set_title('Norm for mean image in original space')",
"Even after extensive hyperparamter tuning, we are unable to learn a reasonable regressor hence we use the convex combination approach in him dim.\nMethod 2\nInstead of choosing a path from the graph, manually choosing a set of points which visibbly lie on a 2D manifold. For regeneration of sub-sampled points, we use convex combinations of the consecutive pairs in high dimensional space itself.",
"\n#Interesting paths with N4D3 model\n#imlist = [1912,3961,2861,4870,146,6648]\n#imlist = [3182,5012,5084,1113,2333,1375]\n#imlist = [5105,5874,4255,2069,1178]\n#imlist = [3583,2134,1034, 3917,3704, 5920,6493]\n#imlist = [1678,6535,6699,344,6677,5115,6433]\n\n#Interesting paths with N2D3 model\nimlist = [1959,3432,6709,4103, 4850,6231,4418,4324]\n#imlist = [369,2749,1542,366,1436,2836]\n\n#Interesting paths with N2D2 model\n#imlist = [2617,4574,4939,5682,1917,3599,6324,1927]\n\nN=25\nlbd = np.linspace(0, 1, N)\ncounter = 0\nfor count, i in enumerate(imlist):\n if count != len(imlist) - 1:\n person1 = i\n person2 = imlist[count + 1]\n for j in range(N):\n im = (lbd[j] * ims[person2]) + ((1 - lbd[j]) * ims[person1])\n im = Image.fromarray(im.reshape((218, 178)))\n im = im.convert('RGB')\n im.save('{}.png'.format(counter))\n counter += 1\n\nos.system(\"ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method2.mp4\")",
"Please check the generated video in the same enclosing folder.\nNow we can obbserve that the video has quite smooth transitions in terms of either similar head poses, hair styles or face shapes, etc."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
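Below is a minimal, self-contained sketch of the embed / interpolate / reconstruct loop used in the notebook above. Scikit-learn's swiss-roll toy data stands in for the CelebA images, and the neighbour count, tree depth and sample indices are illustrative assumptions rather than the notebook's settings.

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn import manifold
from sklearn.tree import ExtraTreeRegressor

# Synthetic manifold data in place of the flattened face images.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Learn a 2-D Isomap embedding of the data.
iso = manifold.Isomap(n_neighbors=10, n_components=2)
Z = iso.fit_transform(X)

# Fit a tree regressor as an approximate inverse map back to the original space.
reg = ExtraTreeRegressor(max_depth=19, random_state=0)
reg.fit(Z, X)

# Sub-sample on the straight line between two embedded points and reconstruct each frame.
p1, p2 = Z[10], Z[20]
for lam in np.linspace(0.0, 1.0, 5):
    point = (1.0 - lam) * p1 + lam * p2
    frame = reg.predict(point.reshape(1, -1))
    print(round(float(lam), 2), frame.ravel())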
dietmarw/EK5312_ElectricalMachines | Chapman/Ch5-Problem_5-10.ipynb | unlicense | [
"Excercises Electric Machinery Fundamentals\nChapter 5\nProblem 5-10",
"%pylab notebook",
"Description\nA synchronous machine has a synchronous reactance of $1.0\\,\\Omega$ per phase and an armature resistance of $0.1\\,\\Omega$ per phase. \n\nIf $\\vec{E}A = 460\\,V\\angle-10°$ and $\\vec{V}\\phi = 480\\,V\\angle0°$, is this machine a motor or a generator? \nHow much power P is this machine consuming from or supplying to the electrical system?\nHow much reactive power Q is this machine consuming from or supplying to the electrical system?",
"Ea = 460 # [V]\nEA_angle = -10/180*pi # [rad]\nEA = Ea * (cos(EA_angle) + 1j*sin(EA_angle))\nVphi = 480 # [V]\nVPhi_angle = 0/180*pi # [rad]\nVPhi = Vphi*exp(1j*VPhi_angle)\nRa = 0.1 # [Ohm]\nXs = 1.0 # [Ohm]",
"SOLUTION\nThis machine is a motor, consuming power from the power system, because $\\vec{E}A$ is lagging $\\vec{V}\\phi$\nIt is also consuming reactive power, because $E_A \\cos{\\delta} < V_\\phi$ . The current flowing in this machine is:\n$$\\vec{I}A = \\frac{\\vec{V}\\phi - \\vec{E}_A}{R_A + jX_s}$$",
"IA = (VPhi - EA) / (Ra + Xs*1j)\nIA_angle = arctan(IA.imag/IA.real)\nprint('IA = {:.1f} A ∠ {:.2f}°'.format(abs(IA), IA_angle/pi*180))",
"Therefore the real power consumed by this motor is:\n$$P =3V_\\phi I_A \\cos{\\theta}$$",
"theta = abs(IA_angle)\nP = 3* abs(VPhi)* abs(IA)* cos(theta)\nprint('''\nP = {:.1f} kW\n============'''.format(P/1e3))",
"and the reactive power consumed by this motor is:\n$$Q = 3V_\\phi I_A \\sin{\\theta}$$",
"Q = 3* abs(VPhi)* abs(IA)* sin(theta)\nprint('''\nQ = {:.1f} kvar\n============='''.format(Q/1e3))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
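The phasor arithmetic in the solution above can be checked with only the Python standard library; the following sketch is not part of the original notebook and simply reproduces the same calculation.

import cmath, math

EA   = cmath.rect(460.0, math.radians(-10.0))   # internal generated voltage, V
VPhi = cmath.rect(480.0, math.radians(0.0))     # phase voltage, V
Z    = 0.1 + 1j * 1.0                           # Ra + jXs, ohms

IA = (VPhi - EA) / Z                            # armature current phasor
theta = abs(cmath.phase(IA))
P = 3.0 * abs(VPhi) * abs(IA) * math.cos(theta) # real power, W
Q = 3.0 * abs(VPhi) * abs(IA) * math.sin(theta) # reactive power, var

print("IA = %.1f A at %.2f deg" % (abs(IA), math.degrees(cmath.phase(IA))))
print("P = %.1f kW, Q = %.1f kvar" % (P / 1e3, Q / 1e3))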
ramabrahma/data-sci-int-capstone | .ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb | gpl-3.0 | [
"Exploration of Prudential Life Insurance Data\nData retrieved from:\nhttps://www.kaggle.com/c/prudential-life-insurance-assessment\nFile descriptions:\n\ntrain.csv - the training set, contains the Response values\ntest.csv - the test set, you must predict the Response variable for all rows in this file\nsample_submission.csv - a sample submission file in the correct format\n\nData fields:\nVariable | Description\n-------- | ------------\nId | A unique identifier associated with an application.\nProduct_Info_1-7 | A set of normalized variables relating to the product applied for\nIns_Age | Normalized age of applicant\nHt | Normalized height of applicant\nWt | Normalized weight of applicant\nBMI | Normalized BMI of applicant\nEmployment_Info_1-6 | A set of normalized variables relating to the employment history of the applicant.\nInsuredInfo_1-6 | A set of normalized variables providing information about the applicant.\nInsurance_History_1-9 | A set of normalized variables relating to the insurance history of the applicant.\nFamily_Hist_1-5 | A set of normalized variables relating to the family history of the applicant.\nMedical_History_1-41 | A set of normalized variables relating to the medical history of the applicant.\nMedical_Keyword_1-48 | A set of dummy variables relating to the presence of/absence of a medical keyword being associated with the application.\nResponse | This is the target variable, an ordinal variable relating to the final decision associated with an application\nThe following variables are all categorical (nominal):\nProduct_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41\nThe following variables are continuous:\nProduct_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5\nThe following variables are discrete:\nMedical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32\nMedical_Keyword_1-48 are dummy variables.\nMy thoughts are as follows:\nThe main dependent variable is the Risk Response (1-8)\nWhat are variables are correlated to the risk response?\nHow do I perform correlation analysis between variables?\nImport libraries",
"# Importing libraries\n\n%pylab inline\n%matplotlib inline\nimport pandas as pd \nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LogNorm\nfrom sklearn import preprocessing\nimport numpy as np\n\n# Convert variable data into categorical, continuous, discrete, \n# and dummy variable lists the following into a dictionary",
"Define categorical data types",
"s = [\"Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41\",\n \"Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5\",\n \"Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32\"]\n \n\nvarTypes = dict()\n\n\n#Very hacky way of inserting and appending ID and Response columns to the required dataframes\n#Make this better\n\nvarTypes['categorical'] = s[0].split(', ')\n#varTypes['categorical'].insert(0, 'Id')\n#varTypes['categorical'].append('Response')\n\nvarTypes['continuous'] = s[1].split(', ')\n#varTypes['continuous'].insert(0, 'Id')\n#varTypes['continuous'].append('Response')\n\nvarTypes['discrete'] = s[2].split(', ')\n#varTypes['discrete'].insert(0, 'Id') \n#varTypes['discrete'].append('Response')\n\n\n\nvarTypes['dummy'] = [\"Medical_Keyword_\"+str(i) for i in range(1,49)]\nvarTypes['dummy'].insert(0, 'Id')\nvarTypes['dummy'].append('Response')\n\n\n\n\n#Prints out each of the the variable types as a check\n#for i in iter(varTypes['dummy']):\n #print i",
"Importing life insurance data set\nThe following variables are all categorical (nominal):\nProduct_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41\nThe following variables are continuous:\nProduct_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5\nThe following variables are discrete:\nMedical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32\nMedical_Keyword_1-48 are dummy variables.",
"#Import training data \nd = pd.read_csv('prud_files/train.csv')\n\ndef normalize_df(d):\n min_max_scaler = preprocessing.MinMaxScaler()\n x = d.values.astype(np.float)\n return pd.DataFrame(min_max_scaler.fit_transform(x))\n\n# Import training data \nd = pd.read_csv('prud_files/train.csv')\n\n#Separation into groups\n\ndf_cat = pd.DataFrame(d, columns=[\"Id\",\"Response\"]+varTypes[\"categorical\"])\ndf_disc = pd.DataFrame(d, columns=[\"Id\",\"Response\"]+varTypes[\"categorical\"])\ndf_cont = pd.DataFrame(d, columns=[\"Id\",\"Response\"]+varTypes[\"categorical\"])\n\nd_cat = df_cat.copy()\n\n#normalizes the columns for binary classification\nnorm_product_info_2 = [pd.get_dummies(d_cat[\"Product_Info_2\"])]\n\na = pd.DataFrame(normalize_df(d_cat[\"Response\"]))\na.columns=[\"nResponse\"]\nd_cat = pd.concat([d_cat, a], axis=1, join='outer')\n\nfor x in varTypes[\"categorical\"]:\n try:\n\n a = pd.DataFrame(normalize_df(d_cat[x]))\n a.columns=[str(\"n\"+x)]\n d_cat = pd.concat([d_cat, a], axis=1, join='outer')\n except Exception as e:\n print e.args\n print \"Error on \"+str(x)+\" w error: \"+str(e)\n\n\nd_cat.iloc[:,62:66].head(5)\n\n# Normalization of columns\n# Create a minimum and maximum processor object\n\n# Define various group by data streams\n\ndf = d\n \ngb_PI2 = df.groupby('Product_Info_1')\ngb_PI2 = df.groupby('Product_Info_2')\n\ngb_Ins_Age = df.groupby('Ins_Age')\ngb_Ht = df.groupby('Ht')\ngb_Wt = df.groupby('Wt')\n\ngb_response = df.groupby('Response')\n\n\n\n#Outputs rows the differnet categorical groups\n\nfor c in df.columns:\n if (c in varTypes['categorical']):\n if(c != 'Id'):\n a = [ str(x)+\", \" for x in df.groupby(c).groups ]\n print c + \" : \" + str(a)\n \n \n\ndf_prod_info = pd.DataFrame(d, columns=([\"Response\"]+ [ \"Product_Info_\"+str(x) for x in range(1,8)])) \ndf_emp_info = pd.DataFrame(d, columns=([\"Response\"]+ [ \"Employment_Info_\"+str(x) for x in range(1,6)])) \ndf_bio = pd.DataFrame(d, columns=[\"Response\", \"Ins_Age\", \"Ht\", \"Wt\",\"BMI\"])\ndf_med_kw = pd.DataFrame(d, columns=([\"Response\"]+ [ \"Medical_Keyword_\"+str(x) for x in range(1,48)])).add(axis=[ \"Medical_Keyword_\"+str(x) for x in range(1,48)])\ndf_med_kw.describe()\n\ndf.head(5)\n\ndf.describe()",
"Grouping of various categorical data sets\nHistograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt",
"plt.figure(0)\nplt.title(\"Categorical - Histogram for Risk Response\")\nplt.xlabel(\"Risk Response (1-7)\")\nplt.ylabel(\"Frequency\")\nplt.hist(df.Response)\nplt.savefig('images/hist_Response.png')\nprint df.Response.describe()\nprint \"\"\n\n\nplt.figure(1)\nplt.title(\"Continuous - Histogram for Ins_Age\")\nplt.xlabel(\"Normalized Ins_Age [0,1]\")\nplt.ylabel(\"Frequency\")\nplt.hist(df.Ins_Age)\nplt.savefig('images/hist_Ins_Age.png')\nprint df.Ins_Age.describe()\nprint \"\"\n\nplt.figure(2)\nplt.title(\"Continuous - Histogram for BMI\")\nplt.xlabel(\"Normalized BMI [0,1]\")\nplt.ylabel(\"Frequency\")\nplt.hist(df.BMI)\nplt.savefig('images/hist_BMI.png')\nprint df.BMI.describe()\nprint \"\"\n\nplt.figure(3)\nplt.title(\"Continuous - Histogram for Wt\")\nplt.xlabel(\"Normalized Wt [0,1]\")\nplt.ylabel(\"Frequency\")\nplt.hist(df.Wt)\nplt.savefig('images/hist_Wt.png')\nprint df.Wt.describe()\nprint \"\"\n\nplt.show()",
"Histograms and descriptive statistics for Product_Info_1-7",
"for i in range(1,8):\n \n print \"The iteration is: \"+str(i)\n print df['Product_Info_'+str(i)].describe()\n print \"\"\n \n plt.figure(i)\n\n if(i == 4):\n plt.title(\"Continuous - Histogram for Product_Info_\"+str(i))\n plt.xlabel(\"Normalized value: [0,1]\")\n plt.ylabel(\"Frequency\")\n else:\n plt.title(\"Categorical - Histogram of Product_Info_\"+str(i))\n plt.xlabel(\"Categories\")\n plt.ylabel(\"Frequency\")\n \n if(i == 2):\n df.Product_Info_2.value_counts().plot(kind='bar')\n else:\n plt.hist(df['Product_Info_'+str(i)])\n \n plt.savefig('images/hist_Product_Info_'+str(i)+'.png')\n\nplt.show()",
"Split dataframes into categorical, continuous, discrete, dummy, and response",
"catD = df.loc[:,varTypes['categorical']]\ncontD = df.loc[:,varTypes['continuous']]\ndisD = df.loc[:,varTypes['discrete']]\ndummyD = df.loc[:,varTypes['dummy']]\nrespD = df.loc[:,['id','Response']]",
"Descriptive statistics and scatter plot relating Product_Info_2 and Response",
"prod_info = [ \"Product_Info_\"+str(i) for i in range(1,8)]\n\na = catD.loc[:, prod_info[1]]\n\nstats = catD.groupby(prod_info[1]).describe()\n\nc = gb_PI2.Response.count()\nplt.figure(0)\n\nplt.scatter(c[0],c[1])\n\nplt.figure(0)\nplt.title(\"Histogram of \"+\"Product_Info_\"+str(i))\nplt.xlabel(\"Categories \" + str((a.describe())['count']))\nplt.ylabel(\"Frequency\")\n\n\n\nfor i in range(1,8):\n a = catD.loc[:, \"Product_Info_\"+str(i)]\n if(i is not 4):\n print a.describe()\n print \"\"\n \n plt.figure(i)\n plt.title(\"Histogram of \"+\"Product_Info_\"+str(i))\n plt.xlabel(\"Categories \" + str((catD.groupby(key).describe())['count']))\n plt.ylabel(\"Frequency\")\n \n #fig, axes = plt.subplots(nrows = 1, ncols = 2)\n #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title(\"Histogram: \"+str(key))\n #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title(\"Cumulative HG: \"+str(key))\n \n if a.dtype in (np.int64, np.float, float, int):\n a.hist()\n \n# Random functions\n#catD.Product_Info_1.describe()\n#catD.loc[:, prod_info].groupby('Product_Info_2').describe()\n#df[varTypes['categorical']].hist()\n\ncatD.head(5)\n\n#Exploration of the discrete data\ndisD.describe()\n\ndisD.head(5)\n\n#Iterate through each categorical column of data\n#Perform a 2D histogram later\n\ni=0 \nfor key in varTypes['categorical']:\n \n #print \"The category is: {0} with value_counts: {1} and detailed tuple: {2} \".format(key, l.count(), l)\n plt.figure(i)\n plt.title(\"Histogram of \"+str(key))\n plt.xlabel(\"Categories \" + str((df.groupby(key).describe())['count']))\n #fig, axes = plt.subplots(nrows = 1, ncols = 2)\n #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title(\"Histogram: \"+str(key))\n #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title(\"Cumulative HG: \"+str(key))\n if df[key].dtype in (np.int64, np.float, float, int):\n df[key].hist()\n \n i+=1\n\n\n#Iterate through each 'discrete' column of data\n#Perform a 2D histogram later\n\ni=0 \nfor key in varTypes['discrete']:\n \n #print \"The category is: {0} with value_counts: {1} and detailed tuple: {2} \".format(key, l.count(), l)\n plt.figure(i)\n fig, axes = plt.subplots(nrows = 1, ncols = 2)\n \n #Histogram based on normalized value counts of the data set\n disD[key].value_counts().hist(ax=axes[0]); axes[0].set_title(\"Histogram: \"+str(key))\n \n #Cumulative histogram based on normalized value counts of the data set\n disD[key].value_counts().hist(cumulative=True,ax=axes[1]); axes[1].set_title(\"Cumulative HG: \"+str(key))\n i+=1\n\n#2D Histogram\n\ni=0 \nfor key in varTypes['categorical']:\n \n #print \"The category is: {0} with value_counts: {1} and detailed tuple: {2} \".format(key, l.count(), l)\n plt.figure(i)\n #fig, axes = plt.subplots(nrows = 1, ncols = 2)\n \n x = catD[key].value_counts(normalize=True)\n y = df['Response']\n \n plt.hist2d(x[1], y, bins=40, norm=LogNorm())\n plt.colorbar()\n \n #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title(\"Histogram: \"+str(key))\n #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title(\"Cumulative HG: \"+str(key))\n i+=1\n\n#Iterate through each categorical column of data\n#Perform a 2D histogram later\n\ni=0 \nfor key in varTypes['categorical']:\n \n #print \"The category is: {0} with value_counts: {1} and detailed tuple: {2} \".format(key, l.count(), l)\n plt.figure(i)\n #fig, axes = plt.subplots(nrows = 1, 
ncols = 2)\n #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title(\"Histogram: \"+str(key))\n #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title(\"Cumulative HG: \"+str(key))\n if df[key].dtype in (np.int64, np.float, float, int):\n #(1.*df[key].value_counts()/len(df[key])).hist()\n df[key].value_counts(normalize=True).plot(kind='bar')\n \n i+=1\n\ndf.loc('Product_Info_1')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
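The variable-type bookkeeping that the notebook above builds by splitting long strings can be written more compactly. The sketch below uses only a partial column list taken from the data dictionary quoted in the notebook, so the lists are deliberately incomplete and meant as an illustration only.

# Partial illustration of the varTypes dictionary; extend the lists as needed.
categorical = ["Product_Info_%d" % i for i in (1, 2, 3, 5, 6, 7)] + ["Family_Hist_1"]
continuous  = ["Product_Info_4", "Ins_Age", "Ht", "Wt", "BMI"]
discrete    = ["Medical_History_%d" % i for i in (1, 10, 15, 24, 32)]
dummy       = ["Medical_Keyword_%d" % i for i in range(1, 49)]

varTypes = {"categorical": categorical, "continuous": continuous,
            "discrete": discrete, "dummy": dummy}

for kind in sorted(varTypes):
    print(kind, len(varTypes[kind]))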
g-weatherill/notebooks | gmpe-smtk/Ground Motion IMs Short.ipynb | agpl-3.0 | [
"Calculating Ground Motion Intensity Measures\nThe SMTK contains two modules for the characterisation of ground motion:\n1) smtk.response_spectrum\nThis module contains methods for calculation of the response of a set of single degree-of-freedom (SDOF) oscillators using an input time series. Two methods are currently supported:\ni) Newmark-Beta\n\nii) Nigam & Jennings (1969) {Preferred}\n\nThe module also includes functions for plotting the response spectra and time series\n2) smtk.intensity_measures \nThis module contains a set of functions for deriving different intensity measures from a strong motion record\ni) get_peak_measures(...) - returns PGA, PGV and PGD\nii) get_response_spectrum(...) - returns the response spectrum\niii) get_response_spectrum_pair(...) - returns a response spectrum pair\niv) geometric_mean_spectrum(...) - returns the geometric mean of a pair of records\nv) arithmetic_mean_spectrum(...) - returns the arithmetic mean of a pair of records\nvi) geometric_mean_spectrum(...) - returns the envelope spectrum of a pair of records\nvii) larger_pga(...) - Returns the spectrum with the larger PGA\nviii) rotate_horizontal(...) - rotates a record pair through angle theta\nix) gmrotdpp(...) - Returns the rotationally-dependent geometric fractile (pp) of a pair of records\nx) gmrotipp(...) - Returns the rotationally-independent geometric fractile (pp) of a pair of records\nExample Usage of the Response Spectrum",
"# Import modules\n%matplotlib inline\nimport numpy as np # Numerical Python package\nimport matplotlib.pyplot as plt # Python plotting package\n# Import\nimport smtk.response_spectrum as rsp # Response Spectra tools\nimport smtk.intensity_measures as ims # Intensity Measure Tools\n\n\nperiods = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.075, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19,\n 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32, 0.34, 0.36, 0.38, 0.40, 0.42, 0.44, 0.46, 0.48, 0.5, \n 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, \n 1.9, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.5, 6.0, \n 6.5, 7.0,7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)\nnumber_periods = len(periods)\n\n# Load record pair from files\nx_record = np.genfromtxt(\"data/sm_record_x.txt\")\ny_record = np.genfromtxt(\"data/sm_record_y.txt\")\n\nx_time_step = 0.002 # Record sampled at 0.002 s \ny_time_step = 0.002",
"Get Response Spectrum - Nigam & Jennings",
"# Create an instance of the Newmark-Beta class\nnigam_jennings = rsp.NigamJennings(x_record, x_time_step, periods, damping=0.05, units=\"cm/s/s\")\nsax, time_series, acc, vel, dis = nigam_jennings.evaluate()\n\n# Plot Response Spectrum\nrsp.plot_response_spectra(sax, axis_type=\"semilogx\", filename=\"images/response_nigam_jennings.pdf\",filetype=\"pdf\")",
"Plot Time Series",
"rsp.plot_time_series(time_series[\"Acceleration\"],\n x_time_step,\n time_series[\"Velocity\"],\n time_series[\"Displacement\"])",
"Intensity Measures\nGet PGA, PGV and PGD",
"pga_x, pgv_x, pgd_x, _, _ = ims.get_peak_measures(0.002, x_record, True, True)\nprint \"PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm\" % (pga_x, pgv_x, pgd_x)\npga_y, pgv_y, pgd_y, _, _ = ims.get_peak_measures(0.002, y_record, True, True)\nprint \"PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm\" % (pga_y, pgv_y, pgd_y)",
"Get Durations: Bracketed, Uniform, Significant",
"print \"Bracketed Duration (> 5 cm/s/s) = %9.3f s\" % ims.get_bracketed_duration(x_record, x_time_step, 5.0)\nprint \"Uniform Duration (> 5 cm/s/s) = %9.3f s\" % ims.get_uniform_duration(x_record, x_time_step, 5.0)\nprint \"Significant Duration (5 - 95 Arias ) = %9.3f s\" % ims.get_significant_duration(x_record, x_time_step, 0.05, 0.95)",
"Get Arias Intensity, CAV, CAV5 and rms acceleration",
"print \"Arias Intensity = %12.4f cm-s\" % ims.get_arias_intensity(x_record, x_time_step)\nprint \"Arias Intensity (5 - 95) = %12.4f cm-s\" % ims.get_arias_intensity(x_record, x_time_step, 0.05, 0.95)\nprint \"CAV = %12.4f cm-s\" % ims.get_cav(x_record, x_time_step)\nprint \"CAV5 = %12.4f cm-s\" % ims.get_cav(x_record, x_time_step, threshold=5.0)\nprint \"Arms = %12.4f cm-s\" % ims.get_arms(x_record, x_time_step)",
"Spectrum Intensities: Housner Intensity, Acceleration Spectrum Intensity",
"# Get response spectrum\nsax = ims.get_response_spectrum(x_record, x_time_step, periods)[0]\nprint \"Velocity Spectrum Intensity (cm/s/s) = %12.5f\" % ims.get_response_spectrum_intensity(sax)\nprint \"Acceleration Spectrum Intensity (cm-s) = %12.5f\" % ims.get_acceleration_spectrum_intensity(sax)\n",
"Get the response spectrum pair from two records",
"sax, say = ims.get_response_spectrum_pair(x_record, x_time_step,\n y_record, y_time_step,\n periods,\n damping=0.05,\n units=\"cm/s/s\",\n method=\"Nigam-Jennings\")\n",
"Get Geometric Mean Spectrum",
"sa_gm = ims.geometric_mean_spectrum(sax, say)\nrsp.plot_response_spectra(sa_gm, \"semilogx\", filename=\"images/geometric_mean_spectrum.pdf\", filetype=\"pdf\")",
"Get Envelope Spectrum",
"sa_env = ims.envelope_spectrum(sax, say)\nrsp.plot_response_spectra(sa_env, \"semilogx\", filename=\"images/envelope_spectrum.pdf\", filetype=\"pdf\")",
"Rotationally Dependent and Independent IMs\nGMRotD50 and GMRotI50",
"gmrotd50 = ims.gmrotdpp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,\n damping=0.05, units=\"cm/s/s\")\ngmroti50 = ims.gmrotipp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,\n damping=0.05, units=\"cm/s/s\")\n\n# Plot all of the rotational angles!\nplt.figure(figsize=(8, 6))\nfor row in gmrotd50[\"GeoMeanPerAngle\"]:\n plt.semilogx(periods, row, \"-\", color=\"LightGray\")\nplt.semilogx(periods, gmrotd50[\"GMRotDpp\"], 'b-', linewidth=2, label=\"GMRotD50\")\nplt.semilogx(periods, gmroti50[\"Pseudo-Acceleration\"], 'r-', linewidth=2, label=\"GMRotI50\")\nplt.xlabel(\"Period (s)\", fontsize=18)\nplt.ylabel(\"Acceleration (cm/s/s)\", fontsize=18)\nplt.legend(loc=0)\nplt.savefig(\"images/rotational_spectra.pdf\", dpi=300, format=\"pdf\")\n",
"Fourier Spectra, Smoothing and HVSR\nShow the Fourier Spectrum",
"ims.plot_fourier_spectrum(x_record, x_time_step,\n filename=\"images/fourier_spectrum.pdf\", filetype=\"pdf\")",
"Smooth the Fourier Spectrum Using the Konno & Omachi (1998) Method",
"from smtk.smoothing.konno_ohmachi import KonnoOhmachi\n# Get the original Fourier spectrum\nfreq, amplitude = ims.get_fourier_spectrum(x_record, x_time_step)\n\n# Configure Smoothing Parameters\nsmoothing_config = {\"bandwidth\": 40, # Size of smoothing window (lower = more smoothing)\n \"count\": 1, # Number of times to apply smoothing (may be more for noisy records) \n \"normalize\": True} \n\n# Apply the Smoothing\nsmoother = KonnoOhmachi(smoothing_config)\nsmoothed_spectra = smoother.apply_smoothing(amplitude, freq)\n\n# Compare the Two Spectra\nplt.figure(figsize=(7,5))\nplt.loglog(freq, amplitude, \"k-\", lw=1.0,label=\"Original\")\nplt.loglog(freq, smoothed_spectra, \"r\", lw=2.0, label=\"Smoothed\")\nplt.xlabel(\"Frequency (Hz)\", fontsize=14)\nplt.xlim(0.05, 200)\nplt.ylabel(\"Fourier Amplitude\", fontsize=14)\nplt.tick_params(labelsize=12)\nplt.legend(loc=0, fontsize=14)\nplt.grid(True)\nplt.savefig(\"images/SmoothedFourierSpectra.pdf\", format=\"pdf\", dpi=300)",
"Get the HVSR\nLoad in the Time Series",
"# Load in a three component data set\nrecord_file = \"data/record_3component.csv\"\nrecord_3comp = np.genfromtxt(record_file, delimiter=\",\")\n\ntime_vector = record_3comp[:, 0]\nx_record = record_3comp[:, 1]\ny_record = record_3comp[:, 2]\nv_record = record_3comp[:, 3]\ntime_step = 0.002\n\n# Plot the records\nfig = plt.figure(figsize=(8,12))\nfig.set_tight_layout(True)\nax = plt.subplot(311)\nax.plot(time_vector, x_record)\nax.set_ylim(-80., 80.)\nax.set_xlim(0., 10.5)\nax.grid(True)\nax.set_xlabel(\"Time (s)\", fontsize=14)\nax.set_ylabel(\"Acceleration (cm/s/s)\", fontsize=14)\nax.tick_params(labelsize=12)\nax.set_title(\"EW\", fontsize=16)\nax = plt.subplot(312)\nax.plot(time_vector, y_record)\nax.set_xlim(0., 10.5)\nax.set_ylim(-80., 80.)\nax.grid(True)\nax.set_xlabel(\"Time (s)\", fontsize=14)\nax.set_ylabel(\"Acceleration (cm/s/s)\", fontsize=14)\nax.set_title(\"NS\", fontsize=16)\nax.tick_params(labelsize=12)\nax = plt.subplot(313)\nax.plot(time_vector, v_record)\nax.set_xlim(0., 10.5)\nax.set_ylim(-40., 40.)\nax.grid(True)\nax.set_xlabel(\"Time (s)\", fontsize=14)\nax.set_ylabel(\"Acceleration (cm/s/s)\", fontsize=14)\nax.set_title(\"Vertical\", fontsize=16)\nax.tick_params(labelsize=12)\nplt.savefig(\"images/3component_timeseries.pdf\", format=\"pdf\", dpi=300)",
"Look at the Fourier Spectra",
"x_freq, x_four = ims.get_fourier_spectrum(x_record, time_step)\ny_freq, y_four = ims.get_fourier_spectrum(y_record, time_step)\nv_freq, v_four = ims.get_fourier_spectrum(v_record, time_step)\nplt.figure(figsize=(7, 5))\nplt.loglog(x_freq, x_four, \"k-\", lw=1.0, label=\"EW\")\nplt.loglog(y_freq, y_four, \"b-\", lw=1.0, label=\"NS\")\nplt.loglog(v_freq, v_four, \"r-\", lw=1.0, label=\"V\")\nplt.xlim(0.05, 200.)\nplt.tick_params(labelsize=12)\nplt.grid(True)\nplt.xlabel(\"Frequency (Hz)\", fontsize=16)\nplt.ylabel(\"Fourier Amplitude\", fontsize=16)\nplt.legend(loc=3, fontsize=16)\nplt.savefig(\"images/3component_fas.pdf\", format=\"pdf\", dpi=300)",
"Calculate the Horizontal To Vertical Spectral Ratio",
"# Setup parameters\nparams = {\"Function\": \"KonnoOhmachi\",\n \"bandwidth\": 40.0,\n \"count\": 1.0,\n \"normalize\": True\n }\n# Returns\n# 1. Horizontal to Vertical Spectral Ratio\n# 2. Frequency\n# 3. Maximum H/V\n# 4. Period of Maximum H/V\nhvsr, freq, max_hv, t_0 = ims.get_hvsr(x_record, time_step, y_record, time_step, v_record, time_step, params)\n\nplt.figure(figsize=(7,5))\nplt.semilogx(freq, hvsr, 'k-', lw=2.0)\n# Show T0\nt_0_line = np.array([[t_0, 0.0],\n [t_0, 1.1 * max_hv]])\nplt.semilogx(1.0 / t_0_line[:, 0], t_0_line[:, 1], \"r--\", lw=1.5)\nplt.xlabel(\"Frequency (Hz)\", fontsize=14)\nplt.ylabel(\"H / V\", fontsize=14)\nplt.tick_params(labelsize=14)\nplt.xlim(0.1, 10.0)\nplt.grid(True)\nplt.title(r\"$T_0 = %.4f s$\" % t_0, fontsize=16)\nplt.savefig(\"images/hvsr_example1.pdf\", format=\"pdf\", dpi=300)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
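For readers without the smtk package, the simplest intensity measures above (peak ground acceleration and velocity, and Arias intensity) can be approximated with plain NumPy. The record below is synthetic and the constants are assumptions for illustration only, so the numbers will not match the notebook's outputs.

import numpy as np

dt = 0.002                                           # time step, s
t = np.arange(0.0, 10.0, dt)
acc = 50.0 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.3 * t)   # synthetic record, cm/s/s

vel = np.cumsum(acc) * dt                            # crude integration to velocity, cm/s
pga = np.abs(acc).max()
pgv = np.abs(vel).max()

g = 981.0                                            # gravity, cm/s/s
arias = (np.pi / (2.0 * g)) * np.sum(acc ** 2) * dt  # Arias intensity (standard definition)

print("PGA = %.2f cm/s/s, PGV = %.2f cm/s, Arias = %.4f" % (pga, pgv, arias))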
scraperwiki/databaker | databaker/tutorial/Finding_your_way.ipynb | agpl-3.0 | [
"Opening and previewing\nThis uses the tiny excel spreadsheet example1.xls. It is small enough to preview inline in this notebook. But for bigger spreadsheet tables you will want to open them up in a separate window.",
"\n# Load in the functions\nfrom databaker.framework import *\n\n# Load the spreadsheet\ntabs = loadxlstabs(\"example1.xls\")\n\n# Select the first table\ntab = tabs[0]\n\nprint(\"The unordered bag of cells for this table looks like:\")\nprint(tab)",
"Selecting cell bags\nA table is also \"bag of cells\", which just so happens to be a set of all the cells in the table. \nA \"bag of cells\" is like a Python set (and looks like one when you print it), but it has extra selection functions that help you navigate around the table.\nWe will learn these as we go along, but you can see the full list on the tutorial_reference notebook.",
"# Preview the table as a table inline\nsavepreviewhtml(tab)\n\nbb = tab.is_bold()\nprint(\"The cells with bold font are\", bb)\n\nprint(\"The\", len(bb), \"cells immediately below these bold font cells are\", bb.shift(DOWN))\n\ncc = tab.filter(\"Cars\")\nprint(\"The single cell with the text 'Cars' is\", cc)\n\ncc.assert_one() # proves there is only one cell in this bag\n\nprint(\"Everything in the column below the 'Cars' cell is\", cc.fill(DOWN))\n\nhcc = tab.filter(\"Cars\").expand(DOWN)\nprint(\"If you wanted to include the 'Cars' heading, then use expand\", hcc)\n\nprint(\"You can print the cells in row-column order if you don't mind unfriendly code\")\nshcc = sorted(hcc.unordered_cells, key=lambda Cell:(Cell.y, Cell.x))\nprint(shcc)\n\nprint(\"It can be easier to see the set of cells coloured within the table\")\nsavepreviewhtml(hcc)",
"Note: As you work through this tutorial, do please feel free to temporarily insert new Jupyter-Cells in order to give yourself a place to experiment with any of the functions that are available. (Remember, the value of the last line in a Jupyter-Cell is always printed out -- in addition to any earlier print-statements.)",
"\"All the cells that have an 'o' in them:\", tab.regex(\".*?o\")",
"Observations and dimensions\nLet's get on with some actual work. In our terminology, an \"Observation\" is a numerical measure (eg anything in the 3x4 array of numbers in the example table), and a \"Dimension\" is one of the headings.\nBoth are made up of a bag of cells, however a Dimension also needs to know how to \"look up\" from the Observation to its dimensional value.",
"# We get the array of observations by selecting its corner and expanding down and to the right\nobs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)\nsavepreviewhtml(obs)\n\n# the two main headings are in a row and a column\nr1 = tab.excel_ref('B3').expand(RIGHT)\nr2 = tab.excel_ref('A3').fill(DOWN)\n\n# here we pass in a list containing two cell bags and get two colours\nsavepreviewhtml([r1, r2])\n\n\n# HDim is made from a bag of cells, a name, and an instruction on how to look it up \n# from an observation cell. \nh1 = HDim(r1, \"Vehicles\", DIRECTLY, ABOVE)\n\n# Here is an example cell\ncc = tab.excel_ref('C5')\n\n# You can preview a dimension as well as just a cell bag\nsavepreviewhtml([h1, cc])\n\n\n# !!! This is the important look-up stage from a cell into a dimension\nprint(\"Cell\", cc, \"matches\", h1.cellvalobs(cc), \"in dimension\", h1.label)\n\n\n# You can start to see through to the final result of all this work when you \n# print out the lookup values for every observation in the table at once. \nfor ob in obs:\n print(\"Obs\", ob, \"maps to\", h1.cellvalobs(ob))",
"Note the value of h1.cellvalobs(ob) is actually a pair composed of the heading cell and its value. This is is because we can over-ride its output value without actually rewriting the original table, as we shall see.",
"# You can change an output value like this:\nh1.AddCellValueOverride(\"Cars\", \"Horses\")\n\nfor ob in obs:\n print(\"Obs\", ob, \"maps to\", h1.cellvalobs(ob))\n\n# Alternatively, you can override by the reference to a single cell to a value \n# (This will work even if the cell C3 is empty, which helps with filling in blank headings)\nh1.AddCellValueOverride(tab.excel_ref('C3'), \"Submarines\")\nfor ob in obs:\n print(\"Obs\", ob, \"maps to\", h1.cellvalobs(ob))\n\n# You can override the header value for an individual observation element. \nb4cell = tab.excel_ref('B4')\nh1.AddCellValueOverride(b4cell, \"Clouds\")\nfor ob in obs:\n print(\"Obs\", ob, \"maps to\", h1.cellvalobs(ob))\n\n# The preview table shows how things have changed\nsavepreviewhtml([h1, obs])\n\nwob = tab.excel_ref('A1')\nprint(\"Wrong-Obs\", wob, \"maps to\", h1.cellvalobs(wob), \" <--- ie Nothing\")\n\n\nh1.AddCellValueOverride(None, \"Who knows?\")\nprint(\"After giving a default value Wrong-Obs\", wob, \"now maps to\", h1.cellvalobs(wob))\n\n# The default even works if the cell bag set is empty. In which case we have a special \n# constant case that maps every observation to the same value\nh3 = HDimConst(\"Category\", \"Beatles\")\nfor ob in obs:\n print(\"Obs\", ob, \"maps to\", h3.cellvalobs(ob))",
"Conversion segments and output\nA ConversionSegment is a collection of Dimensions with an Observation set that is going to be processed and output as a table all at once.\nYou can preview them in HTML (just like the cell bags and dimensions), only this time the observation cells can be clicked on interactively to show how they look up.",
"\ndimensions = [ \n HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE), \n HDim(r1, \"Vehicles\", DIRECTLY, ABOVE), \n HDim(r2, \"Name\", DIRECTLY, LEFT), \n HDimConst(\"Category\", \"Beatles\")\n]\n\nc1 = ConversionSegment(obs, dimensions, processTIMEUNIT=False)\nsavepreviewhtml(c1)\n\n\n# If the table is too big, we can preview it in another file is openable in another browser window.\n# (It's very useful if you are using two computer screens.)\nsavepreviewhtml(c1, \"preview.html\", verbose=False)\n\nprint(\"Looking up all the observations against all the dimensions and print them out\")\nfor ob in c1.segment:\n print(c1.lookupobs(ob))\n\ndf = c1.topandas()\ndf",
"WDA Technical CSV\nThe ONS uses their own data system for publishing their time-series data known as WDA. \nIf you need to output to it, then this next section is for you. \nThe function which outputs to the WDA format is writetechnicalCSV(filename, [conversionsegments]) The format is very verbose because it repeats each dimension name and its value twice in each row, and every row begins with the following list of column entries, whether or not they exist.\n\nobservation, data_marking, statistical_unit_eng, statistical_unit_cym, measure_type_eng, measure_type_cym, observation_type, obs_type_value, unit_multiplier, unit_of_measure_eng, unit_of_measure_cym, confidentuality, geographic_area\n\nThe writetechnicalCSV() function accepts a single conversion segment, a list of conversion segments, or equivalently a pandas dataframe.",
"print(writetechnicalCSV(None, c1))\n\n# This is how to write to a file\nwritetechnicalCSV(\"exampleWDA.csv\", c1)\n\n# We can read this file back in to a list of pandas dataframes\ndfs = readtechnicalCSV(\"exampleWDA.csv\")\nprint(dfs[0])\n",
"Note If you were wondering what the processTIMEUNIT=False was all about in the ConversionSegment constructor, it's a feature to help the WDA output automatically set the TIMEUNIT column according to whether it should be Year, Month, or Quarter.\nYou will note that the TIME column above is 2014.0 when it really should be 2014 with the TIMEUNIT set to Year.\nBy setting it to True the ConversionSegment object will identify the timeunit from the value of the TIME column and then force its format to conform.",
"# See that the `2014` no longer ends with `.0`\nc1 = ConversionSegment(obs, dimensions, processTIMEUNIT=True)\nc1.topandas()\n",
"Note Sometimes the TIME value needs to be created by joining two or more other cells (eg one is a month, and the other is the year). \nSuch an operation can much more easily be done using the pandas column operations than by using the concept of subdimensions which used to exist in Databaker before we took it out. \nThis will be explained in a later worked example."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
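A condensed end-to-end recipe built only from the calls demonstrated in the tutorial above; it assumes the same example1.xls file is in the working directory.

from databaker.framework import *

tab = loadxlstabs("example1.xls")[0]

# Observations: the numeric block starting at B4.
obs = tab.excel_ref('B4').expand(DOWN).expand(RIGHT)

# Dimensions: time, the two headings, and a constant category.
dimensions = [
    HDim(tab.excel_ref('B1'), TIME, CLOSEST, ABOVE),
    HDim(tab.excel_ref('B3').expand(RIGHT), "Vehicles", DIRECTLY, ABOVE),
    HDim(tab.excel_ref('A3').fill(DOWN), "Name", DIRECTLY, LEFT),
    HDimConst("Category", "Beatles"),
]

segment = ConversionSegment(obs, dimensions, processTIMEUNIT=True)
writetechnicalCSV("exampleWDA.csv", segment)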
NazBen/impact-of-dependence | notebooks/grid-search.ipynb | mit | [
"Conservative Estimation using a Grid Seach Minimization\nThis notebook illustrates the different steps for a conservative estimation using a grid search minimization.\nClassic Libraries",
"import openturns as ot\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nrandom_state = 123\nnp.random.seed(random_state)",
"Additive model\nThe first example of conservative estimation consider an additive model $\\eta : \\mathbb R^d \\rightarrow \\mathbb R$ with Gaussian margins. The objectives are to estimate a quantity of interest $\\mathcal C(Y)$ of the model output distribution. Unfortunately, the dependence structure is unknown. In order to be conservative we aim to give bounds to $\\mathcal C(Y)$.\nThe model\nThis example consider the simple additive example.",
"from depimpact.tests import func_sum\nhelp(func_sum)",
"Dimension 2\nWe consider the problem in dimension $d=2$ and a number of pairs $p=1$ for gaussian margins.",
"dim = 2\nmargins = [ot.Normal()]*dim",
"Copula families\nWe consider a gaussian copula for this first example",
"families = np.zeros((dim, dim), dtype=int)\nfamilies[1, 0] = 1",
"Estimations\nWe create an instance of the main class for a conservative estimate.",
"from depimpact import ConservativeEstimate\n\nquant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)",
"First, we compute the quantile at independence",
"n = 1000\nindep_result = quant_estimate.independence(n_input_sample=n, random_state=random_state)",
"We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\\alpha$ to a function that computes the empirical quantile from a given sample.",
"from depimpact import quantile_func\nalpha = 0.05\nq_func = quantile_func(alpha)\nindep_result.q_func = q_func",
"The computation returns a DependenceResult instance. This object gather the informations of the computation. It also computes the output quantity of interest (which can also be changed).",
"sns.jointplot(indep_result.input_sample[:, 0], indep_result.input_sample[:, 1]);\n\nh = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label=\"Output Distribution\")\nplt.plot([indep_result.quantity]*2, h.get_ylim(), label='Quantile at %d%%' % (alpha*100))\nplt.legend(loc=0)\nprint('Output quantile :', indep_result.quantity)",
"A boostrap can be done on the output quantity",
"indep_result.compute_bootstrap(n_bootstrap=5000)",
"And we can plot it",
"sns.distplot(indep_result.bootstrap_sample, axlabel='Output quantile');\n\nci = [0.025, 0.975]\nquantity_ci = indep_result.compute_quantity_bootstrap_ci(ci)\n\nh = sns.distplot(indep_result.output_sample_id, axlabel='Model output', label=\"Output Distribution\")\nplt.plot([indep_result.quantity]*2, h.get_ylim(), 'g-', label='Quantile at %d%%' % (alpha*100))\nplt.plot([quantity_ci[0]]*2, h.get_ylim(), 'g--', label='%d%% confidence intervals' % ((1. - (ci[0] + 1. - ci[1]))*100))\nplt.plot([quantity_ci[1]]*2, h.get_ylim(), 'g--')\nplt.legend(loc=0)\nprint('Quantile at independence: %.2f with a C.O.V at %.1f %%' % (indep_result.boot_mean, indep_result.boot_cov))",
"Grid Search Approach\nFirstly, we consider a grid search approach in order to compare the perfomance with the iterative algorithm. The discretization can be made on the parameter space or on other concordance measure such as the kendall's Tau. This below example shows a grid-search on the parameter space.",
"K = 20\nn = 10000\ngrid_type = 'lhs'\ndep_measure = 'parameter'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure, \n random_state=random_state)",
"The computation returns a ListDependenceResult which is a list of DependenceResult instances and some bonuses.",
"print('The computation did %d model evaluations.' % (grid_result.n_evals))",
"Lets set the quantity function and search for the minimum among the grid results.",
"grid_result.q_func = q_func\nmin_result = grid_result.min_result\nprint('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))",
"We can plot the result in grid results. The below figure shows the output quantiles in function of the dependence parameters.",
"plt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')\nplt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')\nplt.xlabel('Dependence parameter')\nplt.ylabel('Quantile value')\nplt.legend(loc=0);",
"As for the individual problem, we can do a boostrap also, for each parameters. Because we have $K$ parameters, we can do a bootstrap for the $K$ samples, compute the $K$ quantiles for all the bootstrap and get the minimum quantile for each bootstrap.",
"grid_result.compute_bootstraps(n_bootstrap=500)\nboot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)\nboot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()\nboot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]\n\nfig, axes = plt.subplots(1, 2, figsize=(14, 5))\nsns.distplot(boot_min_quantiles, axlabel=\"Minimum quantiles\", ax=axes[0])\nsns.distplot(boot_min_params, axlabel=\"Parameters of the minimum\", ax=axes[1])",
"For the parameter that have the most occurence for the minimum, we compute its bootstrap mean.",
" # The parameter with most occurence\nboot_id_min = max(set(boot_argmin_quantiles), key=boot_argmin_quantiles.count)\nboot_min_result = grid_result[boot_id_min]\n\nboot_mean = boot_min_result.bootstrap_sample.mean()\nboot_std = boot_min_result.bootstrap_sample.std()\nprint('Worst Quantile: {} at {} with a C.O.V of {} %'.format(boot_min_result.boot_mean, min_result.dep_param, boot_min_result.boot_cov*100.))",
"Kendall's Tau\nAn interesting feature is to convert the dependence parameters to Kendall's Tau values.",
"plt.plot(grid_result.kendalls, grid_result.quantities, '.', label='Quantiles')\nplt.plot(min_result.kendall_tau, min_result.quantity, 'ro', label='Minimum quantile')\nplt.xlabel(\"Kendall's tau\")\nplt.ylabel('Quantile')\nplt.legend(loc=0);",
"As we can see, the bounds\nWith bounds on the dependencies\nAn interesting option in the ConservativeEstimate class is to bound the dependencies, due to some prior informations.",
"bounds_tau = np.asarray([[0., 0.7], [0.1, 0.]])\nquant_estimate.bounds_tau = bounds_tau\nK = 20\nn = 10000\ngrid_type = 'lhs'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)\n\ngrid_result.q_func = q_func\nmin_result = grid_result.min_result\nprint('Minimum quantile: {} at param: {}'.format(min_result.quantity, min_result.dep_param))\n\nplt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')\nplt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='minimum')\nplt.xlabel('Dependence parameter')\nplt.ylabel('Quantile value')\nplt.legend(loc=0);",
"Saving the results\nIt is usefull to save the result in a file to load it later and compute other quantities or anything you need!",
"filename = './result.hdf'\ngrid_result.to_hdf(filename)\n\nfrom dependence import ListDependenceResult\nload_grid_result = ListDependenceResult.from_hdf(filename, q_func=q_func, with_input_sample=False)\n\nnp.testing.assert_array_equal(grid_result.output_samples, load_grid_result.output_samples)\n\nimport os\nos.remove(filename)",
"Taking the extreme values of the dependence parameter\nIf the output quantity of interest seems to have a monotonicity with the dependence parameter, it is better to directly take the bounds of the dependence problem. Obviously, the minimum should be at the edges of the design space",
"K = None\nn = 1000\ngrid_type = 'vertices'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)\n\ngrid_result.q_func = q_func\nprint(\"Kendall's Tau : {}, Quantile: {}\".format(grid_result.kendalls.ravel(), grid_result.quantities))\n\nfrom depimpact.plots import matrix_plot_input\nmatrix_plot_input(grid_result.min_result);",
"Higher Dimension\nWe consider the problem in dimension $d=5$.",
"dim = 5\nquant_estimate.margins = [ot.Normal()]*dim",
"Copula families with one dependent pair\nWe consider a gaussian copula for this first example, but for the moment only one pair is dependent.",
"families = np.zeros((dim, dim), dtype=int)\nfamilies[2, 0] = 1\nquant_estimate.families = families\nfamilies\n\nquant_estimate.bounds_tau = None\nquant_estimate.bounds_tau",
"We reset the families and bounds for the current instance. (I don't want to create a new instance, just to check if the setters are good).",
"quant_estimate.vine_structure",
"Let's do the grid search to see",
"K = 20\nn = 10000\ngrid_type = 'vertices'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)",
"The quantile is lower compare to the problem of dimension 1. Indeed, there is more variables, more uncertainty, so a larger deviation of the output.",
"grid_result.q_func = q_func\nmin_result = grid_result.min_result\nprint('Worst Quantile: {} at {}'.format(min_result.quantity, min_result.dep_param))\nmatrix_plot_input(min_result)\n\nplt.plot(grid_result.dep_params, grid_result.quantities, '.', label='Quantiles')\nplt.plot(min_result.dep_param[0], min_result.quantity, 'ro', label='Minimum')\nplt.xlabel('Dependence parameter')\nplt.ylabel('Quantile value')\nplt.legend(loc=0);",
"Copula families with all dependent pairs\nWe consider a gaussian copula for this first example, but for the moment only one pair is dependent.",
"families = np.zeros((dim, dim), dtype=int)\nfor i in range(1, dim):\n for j in range(i):\n families[i, j] = 1\n\nquant_estimate.margins = margins\nquant_estimate.families = families\nquant_estimate.vine_structure = None\nquant_estimate.bounds_tau = None\nquant_estimate.bounds_tau\n\nK = 100\nn = 1000\ngrid_type = 'lhs'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, random_state=random_state)\n\nmin_result = grid_result.min_result\nprint('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))",
"With one fixed pair",
"families[3, 2] = 0\nquant_estimate = ConservativeEstimate(model_func=func_sum, margins=margins, families=families)\n\nK = 100\nn = 10000\ngrid_type = 'lhs'\ngrid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, \n q_func=q_func, random_state=random_state)\n\nmin_result = grid_result.min_result\nprint('Worst Quantile: {0} at {1}'.format(min_result.quantity, min_result.dep_param))\n\ngrid_result.vine_structure\n\nfrom depimpact.plots import matrix_plot_input\n\nmatrix_plot_input(min_result)",
"Save the used grid and load it again",
"K = 100\nn = 1000\ngrid_type = 'lhs'\ngrid_result_1 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, save_grid=True, grid_path='./output')\n\ngrid_result_2 = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, \n q_func=q_func, use_grid=0, grid_path='./output')",
"Then gather the results from the same grid with the same configurations",
"grid_result_1.n_input_sample, grid_result_2.n_input_sample\n\ngrid_result = grid_result_1 + grid_result_2",
"Because the configurations are the same, we can gather the results from two different runs",
"grid_result.n_input_sample"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
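A condensed version of the two-dimensional grid search from the notebook above, restricted to calls that appear in it; openturns and depimpact are required, and the sample sizes are illustrative.

import numpy as np
import openturns as ot
from depimpact import ConservativeEstimate, quantile_func
from depimpact.tests import func_sum

dim = 2
families = np.zeros((dim, dim), dtype=int)
families[1, 0] = 1                      # Gaussian copula on the single pair

estimate = ConservativeEstimate(model_func=func_sum,
                                margins=[ot.Normal()] * dim,
                                families=families)

result = estimate.gridsearch(n_dep_param=20, n_input_sample=10000,
                             grid_type='lhs', q_func=quantile_func(0.05),
                             random_state=123)

worst = result.min_result
print(worst.quantity, worst.dep_param)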